2023-07-21 11:16:57,779 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1 2023-07-21 11:16:57,796 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-21 11:16:57,814 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-21 11:16:57,815 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/cluster_037526aa-760a-5f2b-e269-4eb750dd63c1, deleteOnExit=true 2023-07-21 11:16:57,815 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-21 11:16:57,816 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/test.cache.data in system properties and HBase conf 2023-07-21 11:16:57,817 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/hadoop.tmp.dir in system properties and HBase conf 2023-07-21 11:16:57,817 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/hadoop.log.dir in system properties and HBase conf 2023-07-21 11:16:57,818 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-21 11:16:57,818 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-21 11:16:57,819 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-21 11:16:57,937 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-21 11:16:58,294 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-21 11:16:58,300 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-21 11:16:58,300 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-21 11:16:58,300 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-21 11:16:58,301 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 11:16:58,301 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-21 11:16:58,302 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-21 11:16:58,302 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 11:16:58,303 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 11:16:58,303 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-21 11:16:58,303 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/nfs.dump.dir in system properties and HBase conf 2023-07-21 11:16:58,304 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/java.io.tmpdir in system properties and HBase conf 2023-07-21 11:16:58,304 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 11:16:58,305 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-21 11:16:58,305 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-21 11:16:58,865 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 11:16:58,870 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 11:16:59,112 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-21 11:16:59,266 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-21 11:16:59,278 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 11:16:59,316 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 11:16:59,359 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/java.io.tmpdir/Jetty_localhost_localdomain_33307_hdfs____.rw5o98/webapp 2023-07-21 11:16:59,472 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:33307 2023-07-21 11:16:59,482 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 11:16:59,482 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 11:16:59,944 WARN [Listener at localhost.localdomain/38415] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 11:17:00,019 WARN [Listener at localhost.localdomain/38415] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 11:17:00,034 WARN [Listener at localhost.localdomain/38415] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 11:17:00,040 INFO [Listener at localhost.localdomain/38415] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 11:17:00,043 INFO [Listener at 
localhost.localdomain/38415] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/java.io.tmpdir/Jetty_localhost_46613_datanode____i4gdml/webapp 2023-07-21 11:17:00,143 INFO [Listener at localhost.localdomain/38415] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46613 2023-07-21 11:17:00,572 WARN [Listener at localhost.localdomain/46025] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 11:17:00,663 WARN [Listener at localhost.localdomain/46025] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 11:17:00,668 WARN [Listener at localhost.localdomain/46025] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 11:17:00,671 INFO [Listener at localhost.localdomain/46025] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 11:17:00,679 INFO [Listener at localhost.localdomain/46025] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/java.io.tmpdir/Jetty_localhost_38535_datanode____.1gzu4n/webapp 2023-07-21 11:17:00,844 INFO [Listener at localhost.localdomain/46025] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38535 2023-07-21 11:17:00,900 WARN [Listener at localhost.localdomain/36791] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 11:17:00,985 WARN [Listener at localhost.localdomain/36791] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 11:17:00,998 WARN [Listener at localhost.localdomain/36791] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 11:17:01,001 INFO [Listener at localhost.localdomain/36791] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 11:17:01,028 INFO [Listener at localhost.localdomain/36791] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/java.io.tmpdir/Jetty_localhost_40423_datanode____ak3heo/webapp 2023-07-21 11:17:01,171 INFO [Listener at localhost.localdomain/36791] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40423 2023-07-21 11:17:01,212 WARN [Listener at localhost.localdomain/38409] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 11:17:01,329 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x62209ac394138906: Processing first storage report for DS-439a2015-2672-456c-b982-719bc01aa0de from datanode 8f07919f-8e51-45e6-bdb5-7bdfad95dc80 2023-07-21 11:17:01,331 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x62209ac394138906: from storage DS-439a2015-2672-456c-b982-719bc01aa0de node DatanodeRegistration(127.0.0.1:43969, datanodeUuid=8f07919f-8e51-45e6-bdb5-7bdfad95dc80, infoPort=35413, infoSecurePort=0, ipcPort=36791, storageInfo=lv=-57;cid=testClusterID;nsid=2128313266;c=1689938218944), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-21 11:17:01,331 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x983bf5bdc007fb41: Processing first storage report for DS-a4071ad1-4e91-433c-8330-a1f25eaa2ec3 from datanode 759d3d8d-aad3-4b98-b90a-bcd18ad3f73f 2023-07-21 11:17:01,331 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x983bf5bdc007fb41: from storage DS-a4071ad1-4e91-433c-8330-a1f25eaa2ec3 node DatanodeRegistration(127.0.0.1:37605, datanodeUuid=759d3d8d-aad3-4b98-b90a-bcd18ad3f73f, infoPort=42151, infoSecurePort=0, ipcPort=46025, storageInfo=lv=-57;cid=testClusterID;nsid=2128313266;c=1689938218944), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 11:17:01,332 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x62209ac394138906: Processing first storage report for DS-37cec94d-3199-486b-b38f-82864c578fdc from datanode 8f07919f-8e51-45e6-bdb5-7bdfad95dc80 2023-07-21 11:17:01,332 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x62209ac394138906: from storage DS-37cec94d-3199-486b-b38f-82864c578fdc node DatanodeRegistration(127.0.0.1:43969, datanodeUuid=8f07919f-8e51-45e6-bdb5-7bdfad95dc80, infoPort=35413, infoSecurePort=0, ipcPort=36791, storageInfo=lv=-57;cid=testClusterID;nsid=2128313266;c=1689938218944), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 11:17:01,332 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x983bf5bdc007fb41: Processing first storage report for DS-c66e3c1f-8bdc-4e97-998b-09c96ad92160 from datanode 759d3d8d-aad3-4b98-b90a-bcd18ad3f73f 2023-07-21 11:17:01,332 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x983bf5bdc007fb41: from storage DS-c66e3c1f-8bdc-4e97-998b-09c96ad92160 node DatanodeRegistration(127.0.0.1:37605, datanodeUuid=759d3d8d-aad3-4b98-b90a-bcd18ad3f73f, infoPort=42151, infoSecurePort=0, ipcPort=46025, storageInfo=lv=-57;cid=testClusterID;nsid=2128313266;c=1689938218944), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 11:17:01,364 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf0b49cf2ea4f673a: Processing first storage report for DS-3a5e5901-7d5f-4e20-a6db-3b190cccdb7f from datanode d4d4284a-5481-46a8-929f-860ef8c6abc4 2023-07-21 11:17:01,364 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf0b49cf2ea4f673a: from storage DS-3a5e5901-7d5f-4e20-a6db-3b190cccdb7f node DatanodeRegistration(127.0.0.1:45611, datanodeUuid=d4d4284a-5481-46a8-929f-860ef8c6abc4, infoPort=33957, infoSecurePort=0, ipcPort=38409, storageInfo=lv=-57;cid=testClusterID;nsid=2128313266;c=1689938218944), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-21 11:17:01,364 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf0b49cf2ea4f673a: Processing first storage report for 
DS-2ab161d0-b18d-453e-bfcb-e30e73051887 from datanode d4d4284a-5481-46a8-929f-860ef8c6abc4 2023-07-21 11:17:01,364 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf0b49cf2ea4f673a: from storage DS-2ab161d0-b18d-453e-bfcb-e30e73051887 node DatanodeRegistration(127.0.0.1:45611, datanodeUuid=d4d4284a-5481-46a8-929f-860ef8c6abc4, infoPort=33957, infoSecurePort=0, ipcPort=38409, storageInfo=lv=-57;cid=testClusterID;nsid=2128313266;c=1689938218944), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 11:17:01,703 DEBUG [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1 2023-07-21 11:17:01,800 INFO [Listener at localhost.localdomain/38409] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/cluster_037526aa-760a-5f2b-e269-4eb750dd63c1/zookeeper_0, clientPort=63555, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/cluster_037526aa-760a-5f2b-e269-4eb750dd63c1/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/cluster_037526aa-760a-5f2b-e269-4eb750dd63c1/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-21 11:17:01,818 INFO [Listener at localhost.localdomain/38409] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=63555 2023-07-21 11:17:01,829 INFO [Listener at localhost.localdomain/38409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:01,832 INFO [Listener at localhost.localdomain/38409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:02,563 INFO [Listener at localhost.localdomain/38409] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6 with version=8 2023-07-21 11:17:02,563 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/hbase-staging 2023-07-21 11:17:02,572 DEBUG [Listener at localhost.localdomain/38409] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 11:17:02,573 DEBUG [Listener at localhost.localdomain/38409] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 11:17:02,573 DEBUG [Listener at localhost.localdomain/38409] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-21 11:17:02,573 DEBUG [Listener at localhost.localdomain/38409] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-21 11:17:02,987 INFO [Listener at localhost.localdomain/38409] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-21 11:17:03,527 INFO [Listener at localhost.localdomain/38409] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:17:03,594 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:03,595 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:03,595 INFO [Listener at localhost.localdomain/38409] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:17:03,595 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:03,596 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:17:03,756 INFO [Listener at localhost.localdomain/38409] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:17:03,860 DEBUG [Listener at localhost.localdomain/38409] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-21 11:17:03,990 INFO [Listener at localhost.localdomain/38409] ipc.NettyRpcServer(120): Bind to /136.243.18.41:40703 2023-07-21 11:17:04,006 INFO [Listener at localhost.localdomain/38409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:04,010 INFO [Listener at localhost.localdomain/38409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:04,049 INFO [Listener at localhost.localdomain/38409] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40703 connecting to ZooKeeper ensemble=127.0.0.1:63555 2023-07-21 11:17:04,117 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:407030x0, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:17:04,121 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40703-0x101879855f50000 connected 2023-07-21 11:17:04,165 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ZKUtil(164): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:17:04,166 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ZKUtil(164): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher 
on znode that does not yet exist, /hbase/running 2023-07-21 11:17:04,171 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ZKUtil(164): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:17:04,179 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40703 2023-07-21 11:17:04,180 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40703 2023-07-21 11:17:04,180 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40703 2023-07-21 11:17:04,181 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40703 2023-07-21 11:17:04,181 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40703 2023-07-21 11:17:04,223 INFO [Listener at localhost.localdomain/38409] log.Log(170): Logging initialized @7255ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-21 11:17:04,380 INFO [Listener at localhost.localdomain/38409] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:17:04,381 INFO [Listener at localhost.localdomain/38409] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:17:04,383 INFO [Listener at localhost.localdomain/38409] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:17:04,386 INFO [Listener at localhost.localdomain/38409] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 11:17:04,386 INFO [Listener at localhost.localdomain/38409] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:17:04,386 INFO [Listener at localhost.localdomain/38409] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:17:04,391 INFO [Listener at localhost.localdomain/38409] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:17:04,472 INFO [Listener at localhost.localdomain/38409] http.HttpServer(1146): Jetty bound to port 39495 2023-07-21 11:17:04,474 INFO [Listener at localhost.localdomain/38409] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:17:04,513 INFO [Listener at localhost.localdomain/38409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:04,518 INFO [Listener at localhost.localdomain/38409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2014df18{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:17:04,519 INFO [Listener at localhost.localdomain/38409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:04,520 INFO [Listener at localhost.localdomain/38409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@47667999{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:17:04,713 INFO [Listener at localhost.localdomain/38409] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:17:04,729 INFO [Listener at localhost.localdomain/38409] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:17:04,730 INFO [Listener at localhost.localdomain/38409] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:17:04,732 INFO [Listener at localhost.localdomain/38409] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:17:04,739 INFO [Listener at localhost.localdomain/38409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:04,762 INFO [Listener at localhost.localdomain/38409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@53796997{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/java.io.tmpdir/jetty-0_0_0_0-39495-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3193408285483571376/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 11:17:04,778 INFO [Listener at localhost.localdomain/38409] server.AbstractConnector(333): Started ServerConnector@695e75c2{HTTP/1.1, (http/1.1)}{0.0.0.0:39495} 2023-07-21 11:17:04,778 INFO [Listener at localhost.localdomain/38409] server.Server(415): Started @7810ms 2023-07-21 11:17:04,784 INFO [Listener at localhost.localdomain/38409] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6, hbase.cluster.distributed=false 2023-07-21 11:17:04,879 INFO [Listener at localhost.localdomain/38409] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:17:04,879 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:04,879 INFO [Listener 
at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:04,879 INFO [Listener at localhost.localdomain/38409] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:17:04,880 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:04,880 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:17:04,885 INFO [Listener at localhost.localdomain/38409] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:17:04,888 INFO [Listener at localhost.localdomain/38409] ipc.NettyRpcServer(120): Bind to /136.243.18.41:46255 2023-07-21 11:17:04,891 INFO [Listener at localhost.localdomain/38409] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:17:04,900 DEBUG [Listener at localhost.localdomain/38409] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:17:04,901 INFO [Listener at localhost.localdomain/38409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:04,904 INFO [Listener at localhost.localdomain/38409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:04,907 INFO [Listener at localhost.localdomain/38409] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46255 connecting to ZooKeeper ensemble=127.0.0.1:63555 2023-07-21 11:17:04,917 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:462550x0, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:17:04,918 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46255-0x101879855f50001 connected 2023-07-21 11:17:04,919 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ZKUtil(164): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:17:04,921 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ZKUtil(164): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:04,922 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ZKUtil(164): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:17:04,922 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46255 2023-07-21 11:17:04,922 DEBUG [Listener at 
localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46255 2023-07-21 11:17:04,924 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46255 2023-07-21 11:17:04,924 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46255 2023-07-21 11:17:04,925 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46255 2023-07-21 11:17:04,927 INFO [Listener at localhost.localdomain/38409] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:17:04,928 INFO [Listener at localhost.localdomain/38409] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:17:04,928 INFO [Listener at localhost.localdomain/38409] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:17:04,929 INFO [Listener at localhost.localdomain/38409] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:17:04,930 INFO [Listener at localhost.localdomain/38409] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:17:04,930 INFO [Listener at localhost.localdomain/38409] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:17:04,930 INFO [Listener at localhost.localdomain/38409] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:17:04,933 INFO [Listener at localhost.localdomain/38409] http.HttpServer(1146): Jetty bound to port 37737 2023-07-21 11:17:04,933 INFO [Listener at localhost.localdomain/38409] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:17:04,952 INFO [Listener at localhost.localdomain/38409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:04,953 INFO [Listener at localhost.localdomain/38409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@315670d7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:17:04,953 INFO [Listener at localhost.localdomain/38409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:04,954 INFO [Listener at localhost.localdomain/38409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6fecfa89{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:17:05,082 INFO [Listener at localhost.localdomain/38409] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:17:05,084 INFO [Listener at localhost.localdomain/38409] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:17:05,084 INFO [Listener at localhost.localdomain/38409] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:17:05,084 INFO [Listener at localhost.localdomain/38409] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 11:17:05,085 INFO [Listener at localhost.localdomain/38409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:05,088 INFO [Listener at localhost.localdomain/38409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1f5e424d{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/java.io.tmpdir/jetty-0_0_0_0-37737-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6786005720070512244/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:05,090 INFO [Listener at localhost.localdomain/38409] server.AbstractConnector(333): Started ServerConnector@70ea26d2{HTTP/1.1, (http/1.1)}{0.0.0.0:37737} 2023-07-21 11:17:05,090 INFO [Listener at localhost.localdomain/38409] server.Server(415): Started @8122ms 2023-07-21 11:17:05,106 INFO [Listener at localhost.localdomain/38409] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:17:05,107 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:05,107 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
11:17:05,107 INFO [Listener at localhost.localdomain/38409] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:17:05,108 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:05,108 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:17:05,108 INFO [Listener at localhost.localdomain/38409] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:17:05,113 INFO [Listener at localhost.localdomain/38409] ipc.NettyRpcServer(120): Bind to /136.243.18.41:36863 2023-07-21 11:17:05,115 INFO [Listener at localhost.localdomain/38409] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:17:05,132 DEBUG [Listener at localhost.localdomain/38409] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:17:05,134 INFO [Listener at localhost.localdomain/38409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:05,137 INFO [Listener at localhost.localdomain/38409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:05,139 INFO [Listener at localhost.localdomain/38409] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36863 connecting to ZooKeeper ensemble=127.0.0.1:63555 2023-07-21 11:17:05,147 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:368630x0, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:17:05,149 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ZKUtil(164): regionserver:368630x0, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:17:05,151 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ZKUtil(164): regionserver:368630x0, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:05,152 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ZKUtil(164): regionserver:368630x0, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:17:05,153 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36863-0x101879855f50002 connected 2023-07-21 11:17:05,160 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36863 2023-07-21 11:17:05,163 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36863 2023-07-21 11:17:05,163 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36863 2023-07-21 11:17:05,164 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36863 2023-07-21 11:17:05,170 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36863 2023-07-21 11:17:05,173 INFO [Listener at localhost.localdomain/38409] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:17:05,174 INFO [Listener at localhost.localdomain/38409] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:17:05,174 INFO [Listener at localhost.localdomain/38409] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:17:05,175 INFO [Listener at localhost.localdomain/38409] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:17:05,175 INFO [Listener at localhost.localdomain/38409] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:17:05,175 INFO [Listener at localhost.localdomain/38409] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:17:05,176 INFO [Listener at localhost.localdomain/38409] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:17:05,177 INFO [Listener at localhost.localdomain/38409] http.HttpServer(1146): Jetty bound to port 32777 2023-07-21 11:17:05,177 INFO [Listener at localhost.localdomain/38409] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:17:05,184 INFO [Listener at localhost.localdomain/38409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:05,185 INFO [Listener at localhost.localdomain/38409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6f188e8d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:17:05,185 INFO [Listener at localhost.localdomain/38409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:05,186 INFO [Listener at localhost.localdomain/38409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3dbeab3b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:17:05,322 INFO [Listener at localhost.localdomain/38409] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:17:05,323 INFO [Listener at localhost.localdomain/38409] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:17:05,323 INFO [Listener at localhost.localdomain/38409] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:17:05,324 INFO [Listener at localhost.localdomain/38409] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 11:17:05,326 INFO [Listener at localhost.localdomain/38409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:05,327 INFO [Listener at localhost.localdomain/38409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@64a22c9a{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/java.io.tmpdir/jetty-0_0_0_0-32777-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4770047906962400780/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:05,332 INFO [Listener at localhost.localdomain/38409] server.AbstractConnector(333): Started ServerConnector@53259b85{HTTP/1.1, (http/1.1)}{0.0.0.0:32777} 2023-07-21 11:17:05,333 INFO [Listener at localhost.localdomain/38409] server.Server(415): Started @8365ms 2023-07-21 11:17:05,359 INFO [Listener at localhost.localdomain/38409] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:17:05,359 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:05,360 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
11:17:05,360 INFO [Listener at localhost.localdomain/38409] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:17:05,360 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:05,360 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:17:05,360 INFO [Listener at localhost.localdomain/38409] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:17:05,366 INFO [Listener at localhost.localdomain/38409] ipc.NettyRpcServer(120): Bind to /136.243.18.41:33011 2023-07-21 11:17:05,366 INFO [Listener at localhost.localdomain/38409] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:17:05,377 DEBUG [Listener at localhost.localdomain/38409] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:17:05,378 INFO [Listener at localhost.localdomain/38409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:05,380 INFO [Listener at localhost.localdomain/38409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:05,383 INFO [Listener at localhost.localdomain/38409] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33011 connecting to ZooKeeper ensemble=127.0.0.1:63555 2023-07-21 11:17:05,399 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:330110x0, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:17:05,401 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33011-0x101879855f50003 connected 2023-07-21 11:17:05,402 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ZKUtil(164): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:17:05,403 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ZKUtil(164): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:05,404 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ZKUtil(164): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:17:05,412 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33011 2023-07-21 11:17:05,416 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33011 2023-07-21 11:17:05,420 DEBUG [Listener at localhost.localdomain/38409] 
ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33011 2023-07-21 11:17:05,424 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33011 2023-07-21 11:17:05,425 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33011 2023-07-21 11:17:05,428 INFO [Listener at localhost.localdomain/38409] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:17:05,428 INFO [Listener at localhost.localdomain/38409] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:17:05,428 INFO [Listener at localhost.localdomain/38409] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:17:05,429 INFO [Listener at localhost.localdomain/38409] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:17:05,429 INFO [Listener at localhost.localdomain/38409] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:17:05,429 INFO [Listener at localhost.localdomain/38409] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:17:05,430 INFO [Listener at localhost.localdomain/38409] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:17:05,431 INFO [Listener at localhost.localdomain/38409] http.HttpServer(1146): Jetty bound to port 38855 2023-07-21 11:17:05,431 INFO [Listener at localhost.localdomain/38409] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:17:05,441 INFO [Listener at localhost.localdomain/38409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:05,441 INFO [Listener at localhost.localdomain/38409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4050af2a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:17:05,442 INFO [Listener at localhost.localdomain/38409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:05,442 INFO [Listener at localhost.localdomain/38409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@12e46771{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:17:05,581 INFO [Listener at localhost.localdomain/38409] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:17:05,583 INFO [Listener at localhost.localdomain/38409] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:17:05,583 INFO [Listener at localhost.localdomain/38409] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:17:05,584 INFO [Listener at localhost.localdomain/38409] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:17:05,588 INFO [Listener at localhost.localdomain/38409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:05,589 INFO [Listener at localhost.localdomain/38409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@f6af39f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/java.io.tmpdir/jetty-0_0_0_0-38855-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7420348332900344824/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:05,590 INFO [Listener at localhost.localdomain/38409] server.AbstractConnector(333): Started ServerConnector@65eae3e8{HTTP/1.1, (http/1.1)}{0.0.0.0:38855} 2023-07-21 11:17:05,590 INFO [Listener at localhost.localdomain/38409] server.Server(415): Started @8622ms 2023-07-21 11:17:05,613 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:17:05,620 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@27051488{HTTP/1.1, (http/1.1)}{0.0.0.0:34861} 2023-07-21 11:17:05,620 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(415): Started @8652ms 2023-07-21 11:17:05,621 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode 
/hbase/backup-masters/jenkins-hbase17.apache.org,40703,1689938222766 2023-07-21 11:17:05,630 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 11:17:05,632 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,40703,1689938222766 2023-07-21 11:17:05,659 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:17:05,659 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:17:05,660 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:17:05,660 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:05,661 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:17:05,664 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 11:17:05,667 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 11:17:05,669 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,40703,1689938222766 from backup master directory 2023-07-21 11:17:05,673 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,40703,1689938222766 2023-07-21 11:17:05,673 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 11:17:05,674 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 11:17:05,674 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,40703,1689938222766 2023-07-21 11:17:05,679 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-21 11:17:05,687 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-21 11:17:05,816 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/hbase.id with ID: af5cee8c-4392-4958-8708-9768a3b62dfe 2023-07-21 11:17:05,907 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:05,957 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:06,195 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2e0f64c4 to 127.0.0.1:63555 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:17:06,275 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6e27690b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:06,325 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:06,329 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 11:17:06,422 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-21 11:17:06,422 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-21 11:17:06,430 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-21 11:17:06,443 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-21 11:17:06,446 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:17:06,516 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE =>
'65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/MasterData/data/master/store-tmp 2023-07-21 11:17:06,707 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:06,707 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 11:17:06,707 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:17:06,707 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:17:06,707 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 11:17:06,707 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:17:06,708 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:17:06,708 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 11:17:06,715 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/MasterData/WALs/jenkins-hbase17.apache.org,40703,1689938222766 2023-07-21 11:17:06,788 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C40703%2C1689938222766, suffix=, logDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/MasterData/WALs/jenkins-hbase17.apache.org,40703,1689938222766, archiveDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/MasterData/oldWALs, maxLogs=10 2023-07-21 11:17:06,894 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45611,DS-3a5e5901-7d5f-4e20-a6db-3b190cccdb7f,DISK] 2023-07-21 11:17:06,894 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43969,DS-439a2015-2672-456c-b982-719bc01aa0de,DISK] 2023-07-21 11:17:06,894 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37605,DS-a4071ad1-4e91-433c-8330-a1f25eaa2ec3,DISK] 2023-07-21 11:17:06,910 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 11:17:07,014 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/MasterData/WALs/jenkins-hbase17.apache.org,40703,1689938222766/jenkins-hbase17.apache.org%2C40703%2C1689938222766.1689938226806 2023-07-21 11:17:07,015 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43969,DS-439a2015-2672-456c-b982-719bc01aa0de,DISK], DatanodeInfoWithStorage[127.0.0.1:45611,DS-3a5e5901-7d5f-4e20-a6db-3b190cccdb7f,DISK], DatanodeInfoWithStorage[127.0.0.1:37605,DS-a4071ad1-4e91-433c-8330-a1f25eaa2ec3,DISK]] 2023-07-21 11:17:07,016 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:07,016 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:07,020 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:17:07,021 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:17:07,135 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:17:07,147 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 11:17:07,201 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 11:17:07,220 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-07-21 11:17:07,230 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:17:07,234 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:17:07,271 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:17:07,288 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:07,291 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10562487520, jitterRate=-0.016291692852973938}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:07,291 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 11:17:07,299 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 11:17:07,332 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 11:17:07,332 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 11:17:07,336 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 11:17:07,338 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-21 11:17:07,386 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 47 msec 2023-07-21 11:17:07,386 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 11:17:07,416 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 11:17:07,423 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-21 11:17:07,435 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 11:17:07,442 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 11:17:07,448 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 11:17:07,451 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:07,454 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 11:17:07,455 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 11:17:07,475 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 11:17:07,483 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:07,483 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:07,483 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:07,483 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:07,484 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:07,486 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,40703,1689938222766, sessionid=0x101879855f50000, setting cluster-up flag (Was=false) 2023-07-21 11:17:07,509 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:07,513 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 11:17:07,515 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,40703,1689938222766 2023-07-21 11:17:07,521 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:07,542 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 11:17:07,545 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,40703,1689938222766 2023-07-21 11:17:07,548 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.hbase-snapshot/.tmp 2023-07-21 11:17:07,638 INFO [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(951): ClusterId : af5cee8c-4392-4958-8708-9768a3b62dfe 2023-07-21 11:17:07,638 INFO [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(951): ClusterId : af5cee8c-4392-4958-8708-9768a3b62dfe 2023-07-21 11:17:07,673 DEBUG [RS:0;jenkins-hbase17:46255] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:17:07,676 INFO [RS:2;jenkins-hbase17:33011] regionserver.HRegionServer(951): ClusterId : af5cee8c-4392-4958-8708-9768a3b62dfe 2023-07-21 11:17:07,679 DEBUG [RS:2;jenkins-hbase17:33011] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:17:07,677 DEBUG [RS:1;jenkins-hbase17:36863] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:17:07,688 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 11:17:07,697 DEBUG [RS:2;jenkins-hbase17:33011] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:17:07,697 DEBUG [RS:2;jenkins-hbase17:33011] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:17:07,697 DEBUG [RS:1;jenkins-hbase17:36863] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:17:07,698 DEBUG [RS:1;jenkins-hbase17:36863] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:17:07,701 DEBUG [RS:0;jenkins-hbase17:46255] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:17:07,701 DEBUG [RS:0;jenkins-hbase17:46255] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:17:07,702 DEBUG [RS:2;jenkins-hbase17:33011] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:17:07,704 DEBUG [RS:1;jenkins-hbase17:36863] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:17:07,707 DEBUG 
[RS:0;jenkins-hbase17:46255] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:17:07,708 DEBUG [RS:2;jenkins-hbase17:33011] zookeeper.ReadOnlyZKClient(139): Connect 0x221ebde1 to 127.0.0.1:63555 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:17:07,711 DEBUG [RS:1;jenkins-hbase17:36863] zookeeper.ReadOnlyZKClient(139): Connect 0x3e121752 to 127.0.0.1:63555 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:17:07,718 DEBUG [RS:0;jenkins-hbase17:46255] zookeeper.ReadOnlyZKClient(139): Connect 0x34e7ffc8 to 127.0.0.1:63555 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:17:07,719 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 11:17:07,731 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 11:17:07,732 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-21 11:17:07,736 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,40703,1689938222766] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 11:17:07,780 DEBUG [RS:2;jenkins-hbase17:33011] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7d37a299, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:07,781 DEBUG [RS:2;jenkins-hbase17:33011] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3b9a37d1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:17:07,790 DEBUG [RS:0;jenkins-hbase17:46255] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@63d75ea, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:07,790 DEBUG [RS:0;jenkins-hbase17:46255] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@d9b5475, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:17:07,793 DEBUG [RS:1;jenkins-hbase17:36863] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@53dc265c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:07,794 DEBUG [RS:1;jenkins-hbase17:36863] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2db383d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, 
minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:17:07,839 DEBUG [RS:1;jenkins-hbase17:36863] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase17:36863 2023-07-21 11:17:07,842 DEBUG [RS:0;jenkins-hbase17:46255] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:46255 2023-07-21 11:17:07,858 INFO [RS:0;jenkins-hbase17:46255] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:17:07,859 INFO [RS:0;jenkins-hbase17:46255] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:17:07,859 DEBUG [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 11:17:07,859 INFO [RS:1;jenkins-hbase17:36863] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:17:07,863 INFO [RS:1;jenkins-hbase17:36863] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:17:07,863 DEBUG [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 11:17:07,867 INFO [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,40703,1689938222766 with isa=jenkins-hbase17.apache.org/136.243.18.41:46255, startcode=1689938224878 2023-07-21 11:17:07,868 INFO [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,40703,1689938222766 with isa=jenkins-hbase17.apache.org/136.243.18.41:36863, startcode=1689938225106 2023-07-21 11:17:07,871 DEBUG [RS:2;jenkins-hbase17:33011] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase17:33011 2023-07-21 11:17:07,871 INFO [RS:2;jenkins-hbase17:33011] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:17:07,871 INFO [RS:2;jenkins-hbase17:33011] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:17:07,871 DEBUG [RS:2;jenkins-hbase17:33011] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-21 11:17:07,873 INFO [RS:2;jenkins-hbase17:33011] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,40703,1689938222766 with isa=jenkins-hbase17.apache.org/136.243.18.41:33011, startcode=1689938225358 2023-07-21 11:17:07,899 DEBUG [RS:0;jenkins-hbase17:46255] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:17:07,900 DEBUG [RS:1;jenkins-hbase17:36863] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:17:07,908 DEBUG [RS:2;jenkins-hbase17:33011] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:17:07,936 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 11:17:08,058 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 11:17:08,063 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 11:17:08,063 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 11:17:08,064 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-21 11:17:08,065 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:17:08,066 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:17:08,066 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:17:08,066 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:17:08,066 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-07-21 11:17:08,066 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,066 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:17:08,067 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,072 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:48965, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:17:08,074 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:49039, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:17:08,072 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:58299, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:17:08,082 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689938258082 2023-07-21 11:17:08,086 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 11:17:08,092 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 11:17:08,092 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 11:17:08,093 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 11:17:08,094 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at 
org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:08,100 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:08,113 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 11:17:08,114 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 11:17:08,114 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 11:17:08,115 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 11:17:08,118 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-21 11:17:08,118 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:08,120 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:08,128 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 11:17:08,131 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 11:17:08,131 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 11:17:08,136 DEBUG [RS:2;jenkins-hbase17:33011] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 11:17:08,137 WARN [RS:2;jenkins-hbase17:33011] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 11:17:08,137 DEBUG [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 11:17:08,137 WARN [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 11:17:08,145 DEBUG [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 11:17:08,145 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 11:17:08,145 WARN [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
2023-07-21 11:17:08,150 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 11:17:08,153 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938228153,5,FailOnTimeoutGroup] 2023-07-21 11:17:08,154 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938228153,5,FailOnTimeoutGroup] 2023-07-21 11:17:08,154 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:08,154 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 11:17:08,156 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:08,156 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:08,227 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:08,228 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:08,229 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6 2023-07-21 11:17:08,241 INFO [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,40703,1689938222766 with isa=jenkins-hbase17.apache.org/136.243.18.41:36863, startcode=1689938225106 2023-07-21 11:17:08,241 INFO [RS:2;jenkins-hbase17:33011] 
regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,40703,1689938222766 with isa=jenkins-hbase17.apache.org/136.243.18.41:33011, startcode=1689938225358 2023-07-21 11:17:08,252 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40703] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:08,252 INFO [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,40703,1689938222766 with isa=jenkins-hbase17.apache.org/136.243.18.41:46255, startcode=1689938224878 2023-07-21 11:17:08,253 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,40703,1689938222766] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 11:17:08,254 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,40703,1689938222766] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 11:17:08,259 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40703] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:08,259 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,40703,1689938222766] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 11:17:08,260 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,40703,1689938222766] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 11:17:08,260 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40703] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:08,261 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,40703,1689938222766] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 11:17:08,261 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,40703,1689938222766] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 11:17:08,279 DEBUG [RS:2;jenkins-hbase17:33011] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6 2023-07-21 11:17:08,279 DEBUG [RS:2;jenkins-hbase17:33011] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38415 2023-07-21 11:17:08,279 DEBUG [RS:2;jenkins-hbase17:33011] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39495 2023-07-21 11:17:08,284 DEBUG [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6 2023-07-21 11:17:08,285 DEBUG [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38415 2023-07-21 11:17:08,285 DEBUG [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39495 2023-07-21 11:17:08,294 DEBUG [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6 2023-07-21 11:17:08,294 DEBUG [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38415 2023-07-21 11:17:08,294 DEBUG [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39495 2023-07-21 11:17:08,302 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:08,303 DEBUG [RS:2;jenkins-hbase17:33011] zookeeper.ZKUtil(162): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:08,304 DEBUG [RS:1;jenkins-hbase17:36863] zookeeper.ZKUtil(162): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:08,304 DEBUG [RS:0;jenkins-hbase17:46255] zookeeper.ZKUtil(162): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:08,304 WARN [RS:2;jenkins-hbase17:33011] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 11:17:08,307 INFO [RS:2;jenkins-hbase17:33011] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:17:08,307 DEBUG [RS:2;jenkins-hbase17:33011] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/WALs/jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:08,304 WARN [RS:1;jenkins-hbase17:36863] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 11:17:08,304 WARN [RS:0;jenkins-hbase17:46255] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 11:17:08,308 INFO [RS:1;jenkins-hbase17:36863] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:17:08,327 DEBUG [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/WALs/jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:08,308 INFO [RS:0;jenkins-hbase17:46255] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:17:08,340 DEBUG [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/WALs/jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:08,353 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,46255,1689938224878] 2023-07-21 11:17:08,353 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,36863,1689938225106] 2023-07-21 11:17:08,353 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,33011,1689938225358] 2023-07-21 11:17:08,366 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:08,381 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 11:17:08,384 DEBUG [RS:1;jenkins-hbase17:36863] zookeeper.ZKUtil(162): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:08,385 DEBUG [RS:0;jenkins-hbase17:46255] zookeeper.ZKUtil(162): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:08,385 DEBUG [RS:1;jenkins-hbase17:36863] zookeeper.ZKUtil(162): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:08,386 DEBUG [RS:0;jenkins-hbase17:46255] zookeeper.ZKUtil(162): 
regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:08,386 DEBUG [RS:1;jenkins-hbase17:36863] zookeeper.ZKUtil(162): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:08,387 DEBUG [RS:2;jenkins-hbase17:33011] zookeeper.ZKUtil(162): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:08,387 DEBUG [RS:0;jenkins-hbase17:46255] zookeeper.ZKUtil(162): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:08,388 DEBUG [RS:2;jenkins-hbase17:33011] zookeeper.ZKUtil(162): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:08,389 DEBUG [RS:2;jenkins-hbase17:33011] zookeeper.ZKUtil(162): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:08,397 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/info 2023-07-21 11:17:08,398 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 11:17:08,399 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:08,400 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 11:17:08,405 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:17:08,406 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 11:17:08,407 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:08,408 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 11:17:08,417 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/table 2023-07-21 11:17:08,418 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 11:17:08,419 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:08,421 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740 2023-07-21 11:17:08,425 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740 2023-07-21 11:17:08,430 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-21 11:17:08,435 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 11:17:08,436 DEBUG [RS:0;jenkins-hbase17:46255] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:17:08,436 DEBUG [RS:2;jenkins-hbase17:33011] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:17:08,436 DEBUG [RS:1;jenkins-hbase17:36863] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:17:08,452 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:08,453 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11412477280, jitterRate=0.06286977231502533}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 11:17:08,453 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 11:17:08,453 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 11:17:08,454 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 11:17:08,454 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 11:17:08,454 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 11:17:08,454 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 11:17:08,456 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 11:17:08,456 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 11:17:08,457 INFO [RS:0;jenkins-hbase17:46255] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:17:08,457 INFO [RS:1;jenkins-hbase17:36863] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:17:08,457 INFO [RS:2;jenkins-hbase17:33011] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:17:08,464 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 11:17:08,465 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 11:17:08,479 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 11:17:08,485 INFO [RS:2;jenkins-hbase17:33011] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:17:08,485 INFO [RS:0;jenkins-hbase17:46255] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:17:08,485 INFO [RS:1;jenkins-hbase17:36863] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, 
globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:17:08,491 INFO [RS:1;jenkins-hbase17:36863] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:17:08,491 INFO [RS:2;jenkins-hbase17:33011] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:17:08,492 INFO [RS:1;jenkins-hbase17:36863] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:08,493 INFO [RS:0;jenkins-hbase17:46255] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:17:08,493 INFO [RS:0;jenkins-hbase17:46255] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:08,494 INFO [RS:2;jenkins-hbase17:33011] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:08,494 INFO [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:17:08,494 INFO [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:17:08,494 INFO [RS:2;jenkins-hbase17:33011] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:17:08,502 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 11:17:08,514 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 11:17:08,517 INFO [RS:2;jenkins-hbase17:33011] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
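The MemStoreFlusher and compaction-throughput lines above are consistent with the 2.x defaults: the low-water mark is the global limit times the default lower-limit fraction of 0.95 (782.4 MB x 0.95 ~= 743.3 MB), and the 100 MB/s / 50 MB/s bounds with a 60000 ms tuning period match PressureAwareCompactionThroughputController's defaults. A small sketch of that arithmetic follows; the implied heap size and the property names in the comments are assumptions, not values read from this test's configuration.

    // Sketch of how the MemStoreFlusher numbers in this log relate to each other.
    // Assumes the default fractions; the keys named in comments
    // (hbase.regionserver.global.memstore.size and ...size.lower.limit) are assumptions.
    public class MemStoreLimitSketch {
      public static void main(String[] args) {
        double heapMb = 1956.0;           // implied heap: 782.4 MB / 0.4 (assumption)
        double globalFraction = 0.4;      // default global memstore fraction
        double lowerLimitFraction = 0.95; // default lower-limit fraction

        double globalMemStoreLimit = heapMb * globalFraction;      // ~782.4 MB, as logged
        double lowMark = globalMemStoreLimit * lowerLimitFraction; // ~743.3 MB, as logged

        System.out.printf("globalMemStoreLimit=%.1f MB, lowMark=%.1f MB%n",
            globalMemStoreLimit, lowMark);
      }
    }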
2023-07-21 11:17:08,517 DEBUG [RS:2;jenkins-hbase17:33011] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,518 DEBUG [RS:2;jenkins-hbase17:33011] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,518 DEBUG [RS:2;jenkins-hbase17:33011] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,518 DEBUG [RS:2;jenkins-hbase17:33011] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,518 DEBUG [RS:2;jenkins-hbase17:33011] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,519 DEBUG [RS:2;jenkins-hbase17:33011] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:17:08,519 DEBUG [RS:2;jenkins-hbase17:33011] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,519 INFO [RS:1;jenkins-hbase17:36863] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:08,519 DEBUG [RS:2;jenkins-hbase17:33011] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,519 INFO [RS:0;jenkins-hbase17:46255] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 11:17:08,520 DEBUG [RS:2;jenkins-hbase17:33011] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,519 DEBUG [RS:1;jenkins-hbase17:36863] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,520 DEBUG [RS:2;jenkins-hbase17:33011] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,520 DEBUG [RS:1;jenkins-hbase17:36863] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,520 DEBUG [RS:0;jenkins-hbase17:46255] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,520 DEBUG [RS:1;jenkins-hbase17:36863] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,520 DEBUG [RS:0;jenkins-hbase17:46255] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,521 DEBUG [RS:1;jenkins-hbase17:36863] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,521 DEBUG [RS:0;jenkins-hbase17:46255] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,521 DEBUG [RS:1;jenkins-hbase17:36863] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,521 DEBUG [RS:0;jenkins-hbase17:46255] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,521 DEBUG [RS:1;jenkins-hbase17:36863] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:17:08,521 DEBUG [RS:0;jenkins-hbase17:46255] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,521 DEBUG [RS:1;jenkins-hbase17:36863] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,521 DEBUG [RS:0;jenkins-hbase17:46255] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:17:08,521 DEBUG [RS:1;jenkins-hbase17:36863] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,521 DEBUG [RS:0;jenkins-hbase17:46255] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,521 DEBUG [RS:1;jenkins-hbase17:36863] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,521 DEBUG [RS:0;jenkins-hbase17:46255] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,521 DEBUG [RS:1;jenkins-hbase17:36863] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,522 DEBUG [RS:0;jenkins-hbase17:46255] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,522 DEBUG [RS:0;jenkins-hbase17:46255] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:08,522 INFO [RS:2;jenkins-hbase17:33011] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:08,522 INFO [RS:2;jenkins-hbase17:33011] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:08,522 INFO [RS:2;jenkins-hbase17:33011] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:08,552 INFO [RS:1;jenkins-hbase17:36863] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:08,552 INFO [RS:1;jenkins-hbase17:36863] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:08,552 INFO [RS:1;jenkins-hbase17:36863] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:08,566 INFO [RS:2;jenkins-hbase17:33011] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:17:08,570 INFO [RS:2;jenkins-hbase17:33011] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33011,1689938225358-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:08,571 INFO [RS:0;jenkins-hbase17:46255] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:08,571 INFO [RS:0;jenkins-hbase17:46255] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:08,571 INFO [RS:0;jenkins-hbase17:46255] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:08,588 INFO [RS:1;jenkins-hbase17:36863] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:17:08,603 INFO [RS:1;jenkins-hbase17:36863] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,36863,1689938225106-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:08,618 INFO [RS:0;jenkins-hbase17:46255] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:17:08,618 INFO [RS:0;jenkins-hbase17:46255] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,46255,1689938224878-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 11:17:08,638 INFO [RS:2;jenkins-hbase17:33011] regionserver.Replication(203): jenkins-hbase17.apache.org,33011,1689938225358 started 2023-07-21 11:17:08,638 INFO [RS:2;jenkins-hbase17:33011] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,33011,1689938225358, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:33011, sessionid=0x101879855f50003 2023-07-21 11:17:08,641 INFO [RS:1;jenkins-hbase17:36863] regionserver.Replication(203): jenkins-hbase17.apache.org,36863,1689938225106 started 2023-07-21 11:17:08,641 DEBUG [RS:2;jenkins-hbase17:33011] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:17:08,641 INFO [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,36863,1689938225106, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:36863, sessionid=0x101879855f50002 2023-07-21 11:17:08,641 DEBUG [RS:2;jenkins-hbase17:33011] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:08,641 DEBUG [RS:1;jenkins-hbase17:36863] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:17:08,641 DEBUG [RS:2;jenkins-hbase17:33011] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,33011,1689938225358' 2023-07-21 11:17:08,641 DEBUG [RS:1;jenkins-hbase17:36863] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:08,649 DEBUG [RS:2;jenkins-hbase17:33011] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:17:08,649 DEBUG [RS:1;jenkins-hbase17:36863] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,36863,1689938225106' 2023-07-21 11:17:08,664 DEBUG [RS:1;jenkins-hbase17:36863] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:17:08,667 DEBUG [jenkins-hbase17:40703] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 11:17:08,673 INFO [RS:0;jenkins-hbase17:46255] regionserver.Replication(203): jenkins-hbase17.apache.org,46255,1689938224878 started 2023-07-21 11:17:08,673 INFO [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,46255,1689938224878, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:46255, sessionid=0x101879855f50001 2023-07-21 11:17:08,673 DEBUG [RS:0;jenkins-hbase17:46255] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:17:08,673 DEBUG [RS:0;jenkins-hbase17:46255] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:08,673 DEBUG [RS:0;jenkins-hbase17:46255] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,46255,1689938224878' 2023-07-21 11:17:08,674 DEBUG [RS:0;jenkins-hbase17:46255] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:17:08,677 DEBUG [RS:1;jenkins-hbase17:36863] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 
11:17:08,677 DEBUG [RS:2;jenkins-hbase17:33011] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:17:08,677 DEBUG [RS:0;jenkins-hbase17:46255] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:17:08,678 DEBUG [RS:1;jenkins-hbase17:36863] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:17:08,678 DEBUG [RS:1;jenkins-hbase17:36863] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:17:08,678 DEBUG [RS:1;jenkins-hbase17:36863] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:08,678 DEBUG [RS:1;jenkins-hbase17:36863] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,36863,1689938225106' 2023-07-21 11:17:08,678 DEBUG [RS:1;jenkins-hbase17:36863] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:17:08,678 DEBUG [RS:2;jenkins-hbase17:33011] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:17:08,678 DEBUG [RS:2;jenkins-hbase17:33011] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:17:08,678 DEBUG [RS:2;jenkins-hbase17:33011] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:08,679 DEBUG [RS:2;jenkins-hbase17:33011] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,33011,1689938225358' 2023-07-21 11:17:08,679 DEBUG [RS:2;jenkins-hbase17:33011] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:17:08,680 DEBUG [RS:0;jenkins-hbase17:46255] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:17:08,681 DEBUG [RS:0;jenkins-hbase17:46255] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:17:08,681 DEBUG [RS:0;jenkins-hbase17:46255] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:08,681 DEBUG [RS:0;jenkins-hbase17:46255] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,46255,1689938224878' 2023-07-21 11:17:08,681 DEBUG [RS:0;jenkins-hbase17:46255] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:17:08,681 DEBUG [RS:1;jenkins-hbase17:36863] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:17:08,682 DEBUG [RS:2;jenkins-hbase17:33011] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:17:08,682 DEBUG [RS:0;jenkins-hbase17:46255] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:17:08,682 DEBUG [RS:1;jenkins-hbase17:36863] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:17:08,682 DEBUG [RS:2;jenkins-hbase17:33011] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:17:08,682 INFO 
[RS:1;jenkins-hbase17:36863] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 11:17:08,683 INFO [RS:1;jenkins-hbase17:36863] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 11:17:08,682 INFO [RS:2;jenkins-hbase17:33011] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 11:17:08,683 INFO [RS:2;jenkins-hbase17:33011] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 11:17:08,688 DEBUG [RS:0;jenkins-hbase17:46255] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:17:08,688 INFO [RS:0;jenkins-hbase17:46255] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 11:17:08,688 INFO [RS:0;jenkins-hbase17:46255] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 11:17:08,691 DEBUG [jenkins-hbase17:40703] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:08,695 DEBUG [jenkins-hbase17:40703] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:08,695 DEBUG [jenkins-hbase17:40703] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:08,695 DEBUG [jenkins-hbase17:40703] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:08,695 DEBUG [jenkins-hbase17:40703] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:08,701 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,46255,1689938224878, state=OPENING 2023-07-21 11:17:08,710 DEBUG [PEWorker-5] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 11:17:08,711 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:08,711 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:17:08,717 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:08,799 INFO [RS:2;jenkins-hbase17:33011] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C33011%2C1689938225358, suffix=, logDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/WALs/jenkins-hbase17.apache.org,33011,1689938225358, archiveDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/oldWALs, maxLogs=32 2023-07-21 11:17:08,799 INFO [RS:0;jenkins-hbase17:46255] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C46255%2C1689938224878, suffix=, logDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/WALs/jenkins-hbase17.apache.org,46255,1689938224878, archiveDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/oldWALs, maxLogs=32 
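The "WAL configuration: blocksize=256 MB, rollsize=128 MB, ..., maxLogs=32" entries above reflect the usual relationship rollsize = blocksize x roll multiplier (256 MB x 0.5 = 128 MB), with the AsyncFSWALProvider selected as logged earlier. A hedged sketch of the knobs involved is below; the property names are quoted from the 2.x defaults as assumptions rather than read from this test.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Sketch of the WAL settings that would produce the logged
    // "blocksize=256 MB, rollsize=128 MB, maxLogs=32" line (property names are assumptions).
    public class WalConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();

        conf.set("hbase.wal.provider", "asyncfs");                    // AsyncFSWALProvider, as logged
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f); // rollsize = 256 MB * 0.5 = 128 MB
        conf.setInt("hbase.regionserver.maxlogs", 32);

        double rollsizeMb = conf.getLong("hbase.regionserver.hlog.blocksize", 0)
            * conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f) / (1024.0 * 1024.0);
        System.out.println("rollsize MB = " + rollsizeMb);
      }
    }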
2023-07-21 11:17:08,813 INFO [RS:1;jenkins-hbase17:36863] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C36863%2C1689938225106, suffix=, logDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/WALs/jenkins-hbase17.apache.org,36863,1689938225106, archiveDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/oldWALs, maxLogs=32 2023-07-21 11:17:08,894 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43969,DS-439a2015-2672-456c-b982-719bc01aa0de,DISK] 2023-07-21 11:17:08,896 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37605,DS-a4071ad1-4e91-433c-8330-a1f25eaa2ec3,DISK] 2023-07-21 11:17:08,900 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45611,DS-3a5e5901-7d5f-4e20-a6db-3b190cccdb7f,DISK] 2023-07-21 11:17:08,900 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37605,DS-a4071ad1-4e91-433c-8330-a1f25eaa2ec3,DISK] 2023-07-21 11:17:08,900 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45611,DS-3a5e5901-7d5f-4e20-a6db-3b190cccdb7f,DISK] 2023-07-21 11:17:08,901 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43969,DS-439a2015-2672-456c-b982-719bc01aa0de,DISK] 2023-07-21 11:17:08,918 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45611,DS-3a5e5901-7d5f-4e20-a6db-3b190cccdb7f,DISK] 2023-07-21 11:17:08,959 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:08,966 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:17:08,978 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43969,DS-439a2015-2672-456c-b982-719bc01aa0de,DISK] 2023-07-21 11:17:08,979 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37605,DS-a4071ad1-4e91-433c-8330-a1f25eaa2ec3,DISK] 2023-07-21 11:17:09,009 INFO [RS:1;jenkins-hbase17:36863] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/WALs/jenkins-hbase17.apache.org,36863,1689938225106/jenkins-hbase17.apache.org%2C36863%2C1689938225106.1689938228817 2023-07-21 11:17:09,018 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:53370, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:17:09,023 DEBUG [RS:1;jenkins-hbase17:36863] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43969,DS-439a2015-2672-456c-b982-719bc01aa0de,DISK], DatanodeInfoWithStorage[127.0.0.1:37605,DS-a4071ad1-4e91-433c-8330-a1f25eaa2ec3,DISK], DatanodeInfoWithStorage[127.0.0.1:45611,DS-3a5e5901-7d5f-4e20-a6db-3b190cccdb7f,DISK]] 2023-07-21 11:17:09,047 INFO [RS:0;jenkins-hbase17:46255] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/WALs/jenkins-hbase17.apache.org,46255,1689938224878/jenkins-hbase17.apache.org%2C46255%2C1689938224878.1689938228806 2023-07-21 11:17:09,052 DEBUG [RS:0;jenkins-hbase17:46255] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45611,DS-3a5e5901-7d5f-4e20-a6db-3b190cccdb7f,DISK], DatanodeInfoWithStorage[127.0.0.1:43969,DS-439a2015-2672-456c-b982-719bc01aa0de,DISK], DatanodeInfoWithStorage[127.0.0.1:37605,DS-a4071ad1-4e91-433c-8330-a1f25eaa2ec3,DISK]] 2023-07-21 11:17:09,184 INFO [RS:2;jenkins-hbase17:33011] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/WALs/jenkins-hbase17.apache.org,33011,1689938225358/jenkins-hbase17.apache.org%2C33011%2C1689938225358.1689938228806 2023-07-21 11:17:09,188 DEBUG [RS:2;jenkins-hbase17:33011] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37605,DS-a4071ad1-4e91-433c-8330-a1f25eaa2ec3,DISK], DatanodeInfoWithStorage[127.0.0.1:45611,DS-3a5e5901-7d5f-4e20-a6db-3b190cccdb7f,DISK], DatanodeInfoWithStorage[127.0.0.1:43969,DS-439a2015-2672-456c-b982-719bc01aa0de,DISK]] 2023-07-21 11:17:09,197 WARN [ReadOnlyZKClient-127.0.0.1:63555@0x2e0f64c4] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 11:17:09,299 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,40703,1689938222766] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:17:09,302 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 11:17:09,302 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:17:09,324 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:53380, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:17:09,326 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C46255%2C1689938224878.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/WALs/jenkins-hbase17.apache.org,46255,1689938224878, archiveDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/oldWALs, maxLogs=32 2023-07-21 11:17:09,333 DEBUG 
[RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46255] ipc.CallRunner(144): callId: 1 service: ClientService methodName: Get size: 88 connection: 136.243.18.41:53380 deadline: 1689938289329, exception=org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region hbase:meta,,1 is opening on jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:09,399 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45611,DS-3a5e5901-7d5f-4e20-a6db-3b190cccdb7f,DISK] 2023-07-21 11:17:09,406 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43969,DS-439a2015-2672-456c-b982-719bc01aa0de,DISK] 2023-07-21 11:17:09,406 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37605,DS-a4071ad1-4e91-433c-8330-a1f25eaa2ec3,DISK] 2023-07-21 11:17:09,450 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/WALs/jenkins-hbase17.apache.org,46255,1689938224878/jenkins-hbase17.apache.org%2C46255%2C1689938224878.meta.1689938229328.meta 2023-07-21 11:17:09,472 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43969,DS-439a2015-2672-456c-b982-719bc01aa0de,DISK], DatanodeInfoWithStorage[127.0.0.1:37605,DS-a4071ad1-4e91-433c-8330-a1f25eaa2ec3,DISK], DatanodeInfoWithStorage[127.0.0.1:45611,DS-3a5e5901-7d5f-4e20-a6db-3b190cccdb7f,DISK]] 2023-07-21 11:17:09,473 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:09,475 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:17:09,511 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 11:17:09,525 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-21 11:17:09,534 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 11:17:09,534 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:09,534 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 11:17:09,534 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 11:17:09,544 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 11:17:09,549 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/info 2023-07-21 11:17:09,550 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/info 2023-07-21 11:17:09,552 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 11:17:09,553 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:09,554 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 11:17:09,557 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:17:09,557 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:17:09,561 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 11:17:09,563 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:09,563 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 11:17:09,566 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/table 2023-07-21 11:17:09,566 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/table 2023-07-21 11:17:09,567 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 11:17:09,568 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:09,579 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740 2023-07-21 11:17:09,584 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740 2023-07-21 11:17:09,593 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-21 11:17:09,599 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 11:17:09,605 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10348056000, jitterRate=-0.036262184381484985}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 11:17:09,605 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 11:17:09,634 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689938228935 2023-07-21 11:17:09,678 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 11:17:09,682 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 11:17:09,687 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,46255,1689938224878, state=OPEN 2023-07-21 11:17:09,690 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 11:17:09,691 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:17:09,700 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 11:17:09,701 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,46255,1689938224878 in 974 msec 2023-07-21 11:17:09,719 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 11:17:09,719 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 1.2260 sec 2023-07-21 11:17:09,733 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.9790 sec 2023-07-21 11:17:09,736 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689938229736, completionTime=-1 2023-07-21 11:17:09,736 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 11:17:09,736 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
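With InitMetaProcedure finished and the meta location published as state=OPEN on jenkins-hbase17.apache.org,46255,1689938224878, clients can now resolve hbase:meta; the RegionOpeningException logged at 11:17:09,333 was the expected transient failure while the region was still opening. An illustrative sketch (not taken from the test) of resolving the meta location through the client API:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.util.Bytes;

    // Illustrative sketch: ask the client machinery where hbase:meta landed once the
    // master has published state=OPEN for it, as in the entries above.
    public class MetaLocationSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true); // force a fresh lookup
          System.out.println("hbase:meta is on " + loc.getServerName());
        }
      }
    }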
2023-07-21 11:17:09,829 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3
2023-07-21 11:17:09,829 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689938289829
2023-07-21 11:17:09,829 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689938349829
2023-07-21 11:17:09,829 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 92 msec
2023-07-21 11:17:09,853 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,40703,1689938222766-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled.
2023-07-21 11:17:09,853 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,40703,1689938222766-BalancerChore, period=300000, unit=MILLISECONDS is enabled.
2023-07-21 11:17:09,853 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,40703,1689938222766-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled.
2023-07-21 11:17:09,856 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:40703, period=300000, unit=MILLISECONDS is enabled.
2023-07-21 11:17:09,857 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled.
2023-07-21 11:17:09,869 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175): 
2023-07-21 11:17:09,887 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-21 11:17:09,889 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:09,901 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 11:17:09,907 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:17:09,910 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:17:09,929 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/hbase/namespace/6c58d1ae91a12fe87aa9927da34b36d2 2023-07-21 11:17:09,932 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/hbase/namespace/6c58d1ae91a12fe87aa9927da34b36d2 empty. 2023-07-21 11:17:09,933 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/hbase/namespace/6c58d1ae91a12fe87aa9927da34b36d2 2023-07-21 11:17:09,934 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 11:17:10,046 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:10,053 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6c58d1ae91a12fe87aa9927da34b36d2, NAME => 'hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:10,165 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:10,165 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 6c58d1ae91a12fe87aa9927da34b36d2, disabling compactions & flushes 2023-07-21 11:17:10,165 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2. 
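The CreateTableProcedure recorded above (pid=4) is driven by the table descriptor printed in the HMaster(2148) line: a single 'info' family with ROW bloom filter, IN_MEMORY, 10 versions and 8 KB blocks. A minimal Java sketch of building an equivalent descriptor with the public HBase 2.x client API is shown below; it is illustrative only, assumes an already-open Connection, and uses a hypothetical user table name (hbase:namespace itself is created internally by the master, as the Client=null/null marker indicates, not by client code).

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateNamespaceLikeTable {
      // Builds a descriptor mirroring the column-family settings logged for hbase:namespace
      // and submits it, which triggers a CreateTableProcedure on the master like pid=4 above.
      static void createTable(Connection conn) throws IOException {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("default", "namespace_like"))   // hypothetical table name
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
                .setInMemory(true)                   // IN_MEMORY => 'true'
                .setMaxVersions(10)                  // VERSIONS => '10'
                .setBlocksize(8192)                  // BLOCKSIZE => '8192'
                .build())
            .build();
        try (Admin admin = conn.getAdmin()) {
          admin.createTable(desc);
        }
      }
    }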
2023-07-21 11:17:10,165 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2. 2023-07-21 11:17:10,166 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2. after waiting 0 ms 2023-07-21 11:17:10,166 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2. 2023-07-21 11:17:10,166 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2. 2023-07-21 11:17:10,166 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 6c58d1ae91a12fe87aa9927da34b36d2: 2023-07-21 11:17:10,172 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:17:10,197 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938230178"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938230178"}]},"ts":"1689938230178"} 2023-07-21 11:17:10,244 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 11:17:10,246 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:17:10,252 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938230246"}]},"ts":"1689938230246"} 2023-07-21 11:17:10,256 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 11:17:10,259 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:10,260 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:10,260 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:10,260 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:10,260 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:10,262 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6c58d1ae91a12fe87aa9927da34b36d2, ASSIGN}] 2023-07-21 11:17:10,266 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6c58d1ae91a12fe87aa9927da34b36d2, ASSIGN 2023-07-21 11:17:10,274 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=6c58d1ae91a12fe87aa9927da34b36d2, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46255,1689938224878; forceNewPlan=false, retain=false 2023-07-21 11:17:10,370 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,40703,1689938222766] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:10,373 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,40703,1689938222766] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 11:17:10,377 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:17:10,379 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:17:10,385 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:10,386 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4 empty. 
2023-07-21 11:17:10,387 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:10,387 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 11:17:10,421 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:10,423 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0d251b6fcd6df4af958f1fccdfdc34e4, NAME => 'hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:10,425 INFO [jenkins-hbase17:40703] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 11:17:10,427 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6c58d1ae91a12fe87aa9927da34b36d2, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:10,428 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938230427"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938230427"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938230427"}]},"ts":"1689938230427"} 2023-07-21 11:17:10,437 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure 6c58d1ae91a12fe87aa9927da34b36d2, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:10,459 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:10,459 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 0d251b6fcd6df4af958f1fccdfdc34e4, disabling compactions & flushes 2023-07-21 11:17:10,459 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:10,459 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 
2023-07-21 11:17:10,459 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. after waiting 0 ms 2023-07-21 11:17:10,459 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:10,459 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:10,459 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 0d251b6fcd6df4af958f1fccdfdc34e4: 2023-07-21 11:17:10,465 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:17:10,473 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938230472"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938230472"}]},"ts":"1689938230472"} 2023-07-21 11:17:10,477 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 11:17:10,481 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:17:10,481 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938230481"}]},"ts":"1689938230481"} 2023-07-21 11:17:10,490 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 11:17:10,499 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:10,499 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:10,499 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:10,499 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:10,499 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:10,500 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=0d251b6fcd6df4af958f1fccdfdc34e4, ASSIGN}] 2023-07-21 11:17:10,506 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=0d251b6fcd6df4af958f1fccdfdc34e4, ASSIGN 2023-07-21 11:17:10,509 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=0d251b6fcd6df4af958f1fccdfdc34e4, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,33011,1689938225358; forceNewPlan=false, retain=false 2023-07-21 11:17:10,607 
INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2. 2023-07-21 11:17:10,608 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6c58d1ae91a12fe87aa9927da34b36d2, NAME => 'hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:10,609 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 6c58d1ae91a12fe87aa9927da34b36d2 2023-07-21 11:17:10,609 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:10,609 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 6c58d1ae91a12fe87aa9927da34b36d2 2023-07-21 11:17:10,609 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 6c58d1ae91a12fe87aa9927da34b36d2 2023-07-21 11:17:10,612 INFO [StoreOpener-6c58d1ae91a12fe87aa9927da34b36d2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 6c58d1ae91a12fe87aa9927da34b36d2 2023-07-21 11:17:10,615 DEBUG [StoreOpener-6c58d1ae91a12fe87aa9927da34b36d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/namespace/6c58d1ae91a12fe87aa9927da34b36d2/info 2023-07-21 11:17:10,615 DEBUG [StoreOpener-6c58d1ae91a12fe87aa9927da34b36d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/namespace/6c58d1ae91a12fe87aa9927da34b36d2/info 2023-07-21 11:17:10,615 INFO [StoreOpener-6c58d1ae91a12fe87aa9927da34b36d2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6c58d1ae91a12fe87aa9927da34b36d2 columnFamilyName info 2023-07-21 11:17:10,616 INFO [StoreOpener-6c58d1ae91a12fe87aa9927da34b36d2-1] regionserver.HStore(310): Store=6c58d1ae91a12fe87aa9927da34b36d2/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:10,618 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/namespace/6c58d1ae91a12fe87aa9927da34b36d2 2023-07-21 11:17:10,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/namespace/6c58d1ae91a12fe87aa9927da34b36d2 2023-07-21 11:17:10,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 6c58d1ae91a12fe87aa9927da34b36d2 2023-07-21 11:17:10,625 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/namespace/6c58d1ae91a12fe87aa9927da34b36d2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:10,626 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 6c58d1ae91a12fe87aa9927da34b36d2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11687890400, jitterRate=0.0885196179151535}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:10,626 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 6c58d1ae91a12fe87aa9927da34b36d2: 2023-07-21 11:17:10,628 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2., pid=7, masterSystemTime=1689938230601 2023-07-21 11:17:10,631 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2. 2023-07-21 11:17:10,631 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2. 
2023-07-21 11:17:10,633 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6c58d1ae91a12fe87aa9927da34b36d2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:10,633 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938230632"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938230632"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938230632"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938230632"}]},"ts":"1689938230632"} 2023-07-21 11:17:10,643 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-21 11:17:10,643 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure 6c58d1ae91a12fe87aa9927da34b36d2, server=jenkins-hbase17.apache.org,46255,1689938224878 in 201 msec 2023-07-21 11:17:10,650 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-21 11:17:10,651 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=6c58d1ae91a12fe87aa9927da34b36d2, ASSIGN in 381 msec 2023-07-21 11:17:10,652 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:17:10,653 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938230653"}]},"ts":"1689938230653"} 2023-07-21 11:17:10,658 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 11:17:10,659 INFO [jenkins-hbase17:40703] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 11:17:10,660 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=0d251b6fcd6df4af958f1fccdfdc34e4, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:10,661 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938230660"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938230660"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938230660"}]},"ts":"1689938230660"} 2023-07-21 11:17:10,665 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure 0d251b6fcd6df4af958f1fccdfdc34e4, server=jenkins-hbase17.apache.org,33011,1689938225358}] 2023-07-21 11:17:10,667 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:17:10,670 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 777 msec 2023-07-21 11:17:10,714 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 11:17:10,715 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:17:10,715 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:10,749 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 11:17:10,765 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:17:10,772 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 30 msec 2023-07-21 11:17:10,782 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 11:17:10,785 DEBUG [PEWorker-2] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-21 11:17:10,785 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 11:17:10,820 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:10,820 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 
11:17:10,827 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:52804, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:17:10,831 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:10,832 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0d251b6fcd6df4af958f1fccdfdc34e4, NAME => 'hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:10,832 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:17:10,832 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. service=MultiRowMutationService 2023-07-21 11:17:10,833 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-21 11:17:10,833 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:10,834 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:10,834 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:10,834 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:10,836 INFO [StoreOpener-0d251b6fcd6df4af958f1fccdfdc34e4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:10,838 DEBUG [StoreOpener-0d251b6fcd6df4af958f1fccdfdc34e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/m 2023-07-21 11:17:10,838 DEBUG [StoreOpener-0d251b6fcd6df4af958f1fccdfdc34e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/m 2023-07-21 11:17:10,838 INFO [StoreOpener-0d251b6fcd6df4af958f1fccdfdc34e4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0d251b6fcd6df4af958f1fccdfdc34e4 columnFamilyName m 2023-07-21 11:17:10,839 INFO [StoreOpener-0d251b6fcd6df4af958f1fccdfdc34e4-1] regionserver.HStore(310): Store=0d251b6fcd6df4af958f1fccdfdc34e4/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:10,841 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:10,844 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:10,849 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:10,855 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:10,856 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 0d251b6fcd6df4af958f1fccdfdc34e4; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@6a594594, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:10,856 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 0d251b6fcd6df4af958f1fccdfdc34e4: 2023-07-21 11:17:10,858 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4., pid=9, masterSystemTime=1689938230820 2023-07-21 11:17:10,862 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:10,863 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 
2023-07-21 11:17:10,864 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=0d251b6fcd6df4af958f1fccdfdc34e4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:10,865 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938230864"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938230864"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938230864"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938230864"}]},"ts":"1689938230864"} 2023-07-21 11:17:10,877 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-21 11:17:10,877 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure 0d251b6fcd6df4af958f1fccdfdc34e4, server=jenkins-hbase17.apache.org,33011,1689938225358 in 203 msec 2023-07-21 11:17:10,882 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-21 11:17:10,883 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=0d251b6fcd6df4af958f1fccdfdc34e4, ASSIGN in 378 msec 2023-07-21 11:17:10,899 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:17:10,907 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 123 msec 2023-07-21 11:17:10,911 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:17:10,912 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938230912"}]},"ts":"1689938230912"} 2023-07-21 11:17:10,915 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 11:17:10,918 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:17:10,920 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 11:17:10,922 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 11:17:10,923 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 5.248sec 2023-07-21 11:17:10,925 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; 
CreateTableProcedure table=hbase:rsgroup in 549 msec 2023-07-21 11:17:10,925 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-21 11:17:10,926 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 11:17:10,927 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 11:17:10,928 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,40703,1689938222766-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 11:17:10,929 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,40703,1689938222766-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-21 11:17:10,937 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 11:17:10,979 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,40703,1689938222766] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:17:10,981 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:52808, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:17:10,986 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,40703,1689938222766] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 11:17:10,986 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,40703,1689938222766] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
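With hbase:rsgroup online and the GroupBasedLoadBalancer about to come up, group membership becomes queryable; the ListRSGroupInfos RPC that appears a few lines further down is the same call a client would make. The sketch below shows how that query could be issued through the hbase-rsgroup client used by these tests; it is illustrative, assumes an open Connection, and assumes the RSGroupAdminClient API shipped in the branch-2.4 hbase-rsgroup module.

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListGroups {
      // Lists every region server group, mirroring the
      // RSGroupAdminService.ListRSGroupInfos request seen later in this log.
      static void listGroups(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
          System.out.println(group.getName()
              + " servers=" + group.getServers()
              + " tables=" + group.getTables());
        }
      }
    }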
2023-07-21 11:17:11,011 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ReadOnlyZKClient(139): Connect 0x5d8a744a to 127.0.0.1:63555 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:17:11,032 DEBUG [Listener at localhost.localdomain/38409] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@70f9a96b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:11,068 DEBUG [hconnection-0x595febc4-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:17:11,094 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:53382, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:17:11,108 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,40703,1689938222766 2023-07-21 11:17:11,110 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:11,133 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 11:17:11,135 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:11,135 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,40703,1689938222766] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:11,145 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:42872, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 11:17:11,151 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,40703,1689938222766] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 11:17:11,164 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,40703,1689938222766] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 11:17:11,167 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 11:17:11,167 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:11,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(492): Client=jenkins//136.243.18.41 set balanceSwitch=false 2023-07-21 11:17:11,175 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ReadOnlyZKClient(139): Connect 0x12fbbfab to 127.0.0.1:63555 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 
11:17:11,198 DEBUG [Listener at localhost.localdomain/38409] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@19d2794a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:11,199 INFO [Listener at localhost.localdomain/38409] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:63555 2023-07-21 11:17:11,233 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:17:11,243 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101879855f5000a connected 2023-07-21 11:17:11,284 INFO [Listener at localhost.localdomain/38409] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=420, OpenFileDescriptor=671, MaxFileDescriptor=60000, SystemLoadAverage=668, ProcessCount=186, AvailableMemoryMB=2361 2023-07-21 11:17:11,288 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-21 11:17:11,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:11,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:11,386 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 11:17:11,407 INFO [Listener at localhost.localdomain/38409] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:17:11,407 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:11,407 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:11,408 INFO [Listener at localhost.localdomain/38409] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:17:11,408 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:11,408 INFO [Listener at localhost.localdomain/38409] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:17:11,408 INFO [Listener at localhost.localdomain/38409] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:17:11,414 INFO [Listener at localhost.localdomain/38409] ipc.NettyRpcServer(120): Bind to 
/136.243.18.41:35009 2023-07-21 11:17:11,414 INFO [Listener at localhost.localdomain/38409] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:17:11,416 DEBUG [Listener at localhost.localdomain/38409] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:17:11,418 INFO [Listener at localhost.localdomain/38409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:11,426 INFO [Listener at localhost.localdomain/38409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:11,430 INFO [Listener at localhost.localdomain/38409] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35009 connecting to ZooKeeper ensemble=127.0.0.1:63555 2023-07-21 11:17:11,437 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:350090x0, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:17:11,442 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ZKUtil(162): regionserver:350090x0, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 11:17:11,444 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35009-0x101879855f5000b connected 2023-07-21 11:17:11,445 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ZKUtil(162): regionserver:35009-0x101879855f5000b, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-21 11:17:11,448 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ZKUtil(164): regionserver:35009-0x101879855f5000b, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:17:11,452 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35009 2023-07-21 11:17:11,454 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35009 2023-07-21 11:17:11,455 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35009 2023-07-21 11:17:11,469 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35009 2023-07-21 11:17:11,469 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35009 2023-07-21 11:17:11,472 INFO [Listener at localhost.localdomain/38409] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:17:11,472 INFO [Listener at localhost.localdomain/38409] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:17:11,472 INFO [Listener at localhost.localdomain/38409] http.HttpServer(900): Added global filter 'securityheaders' 
(class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:17:11,473 INFO [Listener at localhost.localdomain/38409] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:17:11,473 INFO [Listener at localhost.localdomain/38409] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:17:11,473 INFO [Listener at localhost.localdomain/38409] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:17:11,474 INFO [Listener at localhost.localdomain/38409] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 11:17:11,474 INFO [Listener at localhost.localdomain/38409] http.HttpServer(1146): Jetty bound to port 40483 2023-07-21 11:17:11,475 INFO [Listener at localhost.localdomain/38409] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:17:11,496 INFO [Listener at localhost.localdomain/38409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:11,497 INFO [Listener at localhost.localdomain/38409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4efade12{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:17:11,497 INFO [Listener at localhost.localdomain/38409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:11,497 INFO [Listener at localhost.localdomain/38409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@b7983ad{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:17:11,610 INFO [Listener at localhost.localdomain/38409] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:17:11,612 INFO [Listener at localhost.localdomain/38409] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:17:11,612 INFO [Listener at localhost.localdomain/38409] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:17:11,612 INFO [Listener at localhost.localdomain/38409] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 11:17:11,624 INFO [Listener at localhost.localdomain/38409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:11,625 INFO [Listener at localhost.localdomain/38409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@37afa654{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/java.io.tmpdir/jetty-0_0_0_0-40483-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1060153893890783528/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 
2023-07-21 11:17:11,627 INFO [Listener at localhost.localdomain/38409] server.AbstractConnector(333): Started ServerConnector@12b776cf{HTTP/1.1, (http/1.1)}{0.0.0.0:40483} 2023-07-21 11:17:11,627 INFO [Listener at localhost.localdomain/38409] server.Server(415): Started @14659ms 2023-07-21 11:17:11,656 INFO [RS:3;jenkins-hbase17:35009] regionserver.HRegionServer(951): ClusterId : af5cee8c-4392-4958-8708-9768a3b62dfe 2023-07-21 11:17:11,659 DEBUG [RS:3;jenkins-hbase17:35009] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:17:11,662 DEBUG [RS:3;jenkins-hbase17:35009] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:17:11,662 DEBUG [RS:3;jenkins-hbase17:35009] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:17:11,664 DEBUG [RS:3;jenkins-hbase17:35009] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:17:11,666 DEBUG [RS:3;jenkins-hbase17:35009] zookeeper.ReadOnlyZKClient(139): Connect 0x09c2e566 to 127.0.0.1:63555 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:17:11,684 DEBUG [RS:3;jenkins-hbase17:35009] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7051a2e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:11,684 DEBUG [RS:3;jenkins-hbase17:35009] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@33a8e16, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:17:11,693 DEBUG [RS:3;jenkins-hbase17:35009] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase17:35009 2023-07-21 11:17:11,693 INFO [RS:3;jenkins-hbase17:35009] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:17:11,694 INFO [RS:3;jenkins-hbase17:35009] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:17:11,694 DEBUG [RS:3;jenkins-hbase17:35009] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-21 11:17:11,694 INFO [RS:3;jenkins-hbase17:35009] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,40703,1689938222766 with isa=jenkins-hbase17.apache.org/136.243.18.41:35009, startcode=1689938231406 2023-07-21 11:17:11,695 DEBUG [RS:3;jenkins-hbase17:35009] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:17:11,701 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34961, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:17:11,702 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40703] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:11,702 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,40703,1689938222766] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 11:17:11,702 DEBUG [RS:3;jenkins-hbase17:35009] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6 2023-07-21 11:17:11,703 DEBUG [RS:3;jenkins-hbase17:35009] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38415 2023-07-21 11:17:11,703 DEBUG [RS:3;jenkins-hbase17:35009] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39495 2023-07-21 11:17:11,707 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:11,707 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:11,707 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:11,707 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:11,708 DEBUG [RS:3;jenkins-hbase17:35009] zookeeper.ZKUtil(162): regionserver:35009-0x101879855f5000b, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:11,708 WARN [RS:3;jenkins-hbase17:35009] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
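The fourth region server (RS:3, port 35009) launched by the test during the "Restoring servers: 1" step has now reported for duty and been added to the default RSGroup. In HBaseTestingUtility-based tests an extra region server is typically started against the running minicluster roughly as sketched below; this is an assumed illustration, not code taken from TestRSGroupsBase, and it presumes the test utility with the cluster already up.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    public class AddRegionServer {
      // Starts one more region server in the running minicluster, which then goes
      // through the reportForDuty / RegionServerTracker registration seen above.
      static void addRegionServer(HBaseTestingUtility testUtil) throws Exception {
        MiniHBaseCluster cluster = testUtil.getMiniHBaseCluster();
        JVMClusterUtil.RegionServerThread rst = cluster.startRegionServer();
        rst.waitForServerOnline();   // block until the new server has checked in
      }
    }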
2023-07-21 11:17:11,708 INFO [RS:3;jenkins-hbase17:35009] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:17:11,708 DEBUG [RS:3;jenkins-hbase17:35009] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/WALs/jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:11,708 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,40703,1689938222766] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:11,709 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:11,710 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,40703,1689938222766] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 11:17:11,710 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,35009,1689938231406] 2023-07-21 11:17:11,710 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:11,710 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:11,710 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:11,722 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:11,723 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:11,723 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:11,723 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:11,725 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:11,725 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,40703,1689938222766] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-21 11:17:11,725 
DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:11,726 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:11,727 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:11,729 DEBUG [RS:3;jenkins-hbase17:35009] zookeeper.ZKUtil(162): regionserver:35009-0x101879855f5000b, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:11,729 DEBUG [RS:3;jenkins-hbase17:35009] zookeeper.ZKUtil(162): regionserver:35009-0x101879855f5000b, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:11,729 DEBUG [RS:3;jenkins-hbase17:35009] zookeeper.ZKUtil(162): regionserver:35009-0x101879855f5000b, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:11,730 DEBUG [RS:3;jenkins-hbase17:35009] zookeeper.ZKUtil(162): regionserver:35009-0x101879855f5000b, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:11,731 DEBUG [RS:3;jenkins-hbase17:35009] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:17:11,731 INFO [RS:3;jenkins-hbase17:35009] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:17:11,736 INFO [RS:3;jenkins-hbase17:35009] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:17:11,737 INFO [RS:3;jenkins-hbase17:35009] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:17:11,737 INFO [RS:3;jenkins-hbase17:35009] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:11,737 INFO [RS:3;jenkins-hbase17:35009] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:17:11,740 INFO [RS:3;jenkins-hbase17:35009] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 11:17:11,740 DEBUG [RS:3;jenkins-hbase17:35009] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:11,741 DEBUG [RS:3;jenkins-hbase17:35009] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:11,741 DEBUG [RS:3;jenkins-hbase17:35009] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:11,741 DEBUG [RS:3;jenkins-hbase17:35009] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:11,741 DEBUG [RS:3;jenkins-hbase17:35009] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:11,741 DEBUG [RS:3;jenkins-hbase17:35009] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:17:11,741 DEBUG [RS:3;jenkins-hbase17:35009] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:11,741 DEBUG [RS:3;jenkins-hbase17:35009] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:11,741 DEBUG [RS:3;jenkins-hbase17:35009] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:11,741 DEBUG [RS:3;jenkins-hbase17:35009] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:11,749 INFO [RS:3;jenkins-hbase17:35009] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:11,750 INFO [RS:3;jenkins-hbase17:35009] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:11,750 INFO [RS:3;jenkins-hbase17:35009] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:11,764 INFO [RS:3;jenkins-hbase17:35009] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:17:11,765 INFO [RS:3;jenkins-hbase17:35009] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,35009,1689938231406-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
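Editor's note: the ScheduledChore entries above and below (CompactionThroughputTuner, CompactedHFilesCleaner, CompactionChecker, MemstoreFlusherChore, nonceCleaner, HeapMemoryTunerChore) are periodic tasks driven by the region server's ChoreService. A minimal sketch of that mechanism using HBase's ChoreService/ScheduledChore classes (internal API, shown only to illustrate what a "Chore ... is enabled" line means); the chore name, period, and printed message are made up for illustration:

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public final class ChoreSketch {
      public static void main(String[] args) throws Exception {
        // Simple stopper; the real region server passes itself (HRegionServer implements Stoppable).
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };
        ChoreService service = new ChoreService("sketch");
        // Runs every 1000 ms, like the CompactionChecker/MemstoreFlusherChore entries above.
        service.scheduleChore(new ScheduledChore("sketchChore", stopper, 1000) {
          @Override protected void chore() {
            System.out.println("periodic work");
          }
        });
        Thread.sleep(5_000);
        stopper.stop("done");
        service.shutdown();
      }
    }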
2023-07-21 11:17:11,778 INFO [RS:3;jenkins-hbase17:35009] regionserver.Replication(203): jenkins-hbase17.apache.org,35009,1689938231406 started 2023-07-21 11:17:11,779 INFO [RS:3;jenkins-hbase17:35009] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,35009,1689938231406, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:35009, sessionid=0x101879855f5000b 2023-07-21 11:17:11,779 DEBUG [RS:3;jenkins-hbase17:35009] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:17:11,779 DEBUG [RS:3;jenkins-hbase17:35009] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:11,779 DEBUG [RS:3;jenkins-hbase17:35009] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,35009,1689938231406' 2023-07-21 11:17:11,779 DEBUG [RS:3;jenkins-hbase17:35009] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:17:11,779 DEBUG [RS:3;jenkins-hbase17:35009] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:17:11,780 DEBUG [RS:3;jenkins-hbase17:35009] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:17:11,780 DEBUG [RS:3;jenkins-hbase17:35009] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:17:11,780 DEBUG [RS:3;jenkins-hbase17:35009] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:11,780 DEBUG [RS:3;jenkins-hbase17:35009] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,35009,1689938231406' 2023-07-21 11:17:11,780 DEBUG [RS:3;jenkins-hbase17:35009] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:17:11,780 DEBUG [RS:3;jenkins-hbase17:35009] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:17:11,781 DEBUG [RS:3;jenkins-hbase17:35009] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:17:11,781 INFO [RS:3;jenkins-hbase17:35009] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 11:17:11,781 INFO [RS:3;jenkins-hbase17:35009] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
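Editor's note: everything from the ClusterId line above through the quota-manager lines is a fourth region server (RS:3, port 35009) joining the already-running mini cluster and registering with the master (reportForDuty, ZK znode, WAL provider, executors, chores). A hedged sketch of how a test can add such a server and wait for registration; TEST_UTIL-style calls are assumed to be available on HBaseTestingUtility/MiniHBaseCluster and the timeout is illustrative:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;

    public final class ExtraRegionServerSketch {
      // Starts one extra in-process region server and waits until the master lists it,
      // i.e. until the reportForDuty/registration sequence seen above has completed.
      static void addRegionServer(HBaseTestingUtility testUtil) throws Exception {
        MiniHBaseCluster cluster = testUtil.getMiniHBaseCluster();
        final int before = testUtil.getAdmin().getRegionServers().size();
        cluster.startRegionServer();
        testUtil.waitFor(60_000, () -> {
          try {
            return testUtil.getAdmin().getRegionServers().size() == before + 1;
          } catch (IOException e) {
            return false;  // keep polling until the new server shows up or the wait times out
          }
        });
      }
    }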
2023-07-21 11:17:11,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:11,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:11,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:11,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:11,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:11,800 DEBUG [hconnection-0xc2f4991-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:17:11,809 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:53384, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:17:11,814 DEBUG [hconnection-0xc2f4991-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:17:11,817 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:52818, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:17:11,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:11,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:11,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:40703] to rsgroup master 2023-07-21 11:17:11,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-21 11:17:11,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:42872 deadline: 1689939431831, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist.
2023-07-21 11:17:11,833 WARN [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-21 11:17:11,836 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-21 11:17:11,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup
2023-07-21 11:17:11,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-21 11:17:11,838 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009, jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:46255], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-21 11:17:11,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default
2023-07-21 11:17:11,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-21 11:17:11,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default
2023-07-21 11:17:11,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-21 11:17:11,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup Group_testTableMoveTruncateAndDrop_287573559
2023-07-21 11:17:11,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_287573559
2023-07-21 11:17:11,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode:
/hbase/rsgroup/default 2023-07-21 11:17:11,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:11,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:11,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:11,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:11,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:11,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009] to rsgroup Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:11,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:11,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:11,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:11,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:11,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(238): Moving server region 0d251b6fcd6df4af958f1fccdfdc34e4, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:11,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=0d251b6fcd6df4af958f1fccdfdc34e4, REOPEN/MOVE 2023-07-21 11:17:11,884 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=0d251b6fcd6df4af958f1fccdfdc34e4, REOPEN/MOVE 2023-07-21 11:17:11,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 11:17:11,885 INFO [RS:3;jenkins-hbase17:35009] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C35009%2C1689938231406, suffix=, logDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/WALs/jenkins-hbase17.apache.org,35009,1689938231406, archiveDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/oldWALs, maxLogs=32 2023-07-21 11:17:11,888 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 
updating hbase:meta row=0d251b6fcd6df4af958f1fccdfdc34e4, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:11,889 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938231888"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938231888"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938231888"}]},"ts":"1689938231888"} 2023-07-21 11:17:11,892 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE; CloseRegionProcedure 0d251b6fcd6df4af958f1fccdfdc34e4, server=jenkins-hbase17.apache.org,33011,1689938225358}] 2023-07-21 11:17:11,919 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43969,DS-439a2015-2672-456c-b982-719bc01aa0de,DISK] 2023-07-21 11:17:11,920 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45611,DS-3a5e5901-7d5f-4e20-a6db-3b190cccdb7f,DISK] 2023-07-21 11:17:11,920 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37605,DS-a4071ad1-4e91-433c-8330-a1f25eaa2ec3,DISK] 2023-07-21 11:17:11,925 INFO [RS:3;jenkins-hbase17:35009] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/WALs/jenkins-hbase17.apache.org,35009,1689938231406/jenkins-hbase17.apache.org%2C35009%2C1689938231406.1689938231887 2023-07-21 11:17:11,925 DEBUG [RS:3;jenkins-hbase17:35009] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43969,DS-439a2015-2672-456c-b982-719bc01aa0de,DISK], DatanodeInfoWithStorage[127.0.0.1:45611,DS-3a5e5901-7d5f-4e20-a6db-3b190cccdb7f,DISK], DatanodeInfoWithStorage[127.0.0.1:37605,DS-a4071ad1-4e91-433c-8330-a1f25eaa2ec3,DISK]] 2023-07-21 11:17:12,055 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:12,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 0d251b6fcd6df4af958f1fccdfdc34e4, disabling compactions & flushes 2023-07-21 11:17:12,056 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:12,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:12,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. after waiting 0 ms 2023-07-21 11:17:12,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 
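Editor's note: the ConstraintException a few entries above comes from the test's cleanup path (TestRSGroupsBase.tearDownAfterMethod tries to move the master's address, which is not a registered region server), while the later move of ports 33011/35009 into Group_testTableMoveTruncateAndDrop_287573559 is what triggered the REOPEN/MOVE of the hbase:rsgroup region now being closed. A rough client-side sketch of those calls, using the RSGroupAdminClient named in the stack trace; the single-argument constructor, connection setup, and literal host/ports are illustrative assumptions, not the test's actual code:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class RsGroupMoveSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          groups.addRSGroup("Group_testTableMoveTruncateAndDrop_287573559");
          // Moving a live region server succeeds and makes the master reassign any
          // regions it hosts that do not belong to the target group.
          groups.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase17.apache.org", 33011)),
              "Group_testTableMoveTruncateAndDrop_287573559");
          // Moving the master's address is refused, as in the ConstraintException above.
          try {
            groups.moveServers(
                Collections.singleton(Address.fromParts("jenkins-hbase17.apache.org", 40703)),
                "master");
          } catch (ConstraintException expected) {
            // "Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist."
          }
        }
      }
    }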
2023-07-21 11:17:12,057 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 0d251b6fcd6df4af958f1fccdfdc34e4 1/1 column families, dataSize=1.40 KB heapSize=2.39 KB 2023-07-21 11:17:12,220 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.40 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/.tmp/m/18080b713b7b468e9af86a18ffb475be 2023-07-21 11:17:12,305 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/.tmp/m/18080b713b7b468e9af86a18ffb475be as hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/m/18080b713b7b468e9af86a18ffb475be 2023-07-21 11:17:12,320 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/m/18080b713b7b468e9af86a18ffb475be, entries=3, sequenceid=9, filesize=5.2 K 2023-07-21 11:17:12,328 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.40 KB/1433, heapSize ~2.38 KB/2432, currentSize=0 B/0 for 0d251b6fcd6df4af958f1fccdfdc34e4 in 271ms, sequenceid=9, compaction requested=false 2023-07-21 11:17:12,329 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 11:17:12,344 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-21 11:17:12,345 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:17:12,346 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 
2023-07-21 11:17:12,346 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 0d251b6fcd6df4af958f1fccdfdc34e4: 2023-07-21 11:17:12,346 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 0d251b6fcd6df4af958f1fccdfdc34e4 move to jenkins-hbase17.apache.org,36863,1689938225106 record at close sequenceid=9 2023-07-21 11:17:12,349 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:12,350 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=0d251b6fcd6df4af958f1fccdfdc34e4, regionState=CLOSED 2023-07-21 11:17:12,351 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938232350"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938232350"}]},"ts":"1689938232350"} 2023-07-21 11:17:12,363 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-21 11:17:12,364 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; CloseRegionProcedure 0d251b6fcd6df4af958f1fccdfdc34e4, server=jenkins-hbase17.apache.org,33011,1689938225358 in 465 msec 2023-07-21 11:17:12,365 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=0d251b6fcd6df4af958f1fccdfdc34e4, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,36863,1689938225106; forceNewPlan=false, retain=false 2023-07-21 11:17:12,515 INFO [jenkins-hbase17:40703] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
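Editor's note: after the balancer picks jenkins-hbase17.apache.org,36863 as the new host, the open runs in the entries that follow, and clients that cached the old location get the RegionMovedException logged further below. A small sketch of how a caller can refresh a cached location once such a move finishes, using the standard HBase 2.x RegionLocator API (class and method names here are illustrative):

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public final class RegionLocationSketch {
      // Re-reads hbase:meta (reload = true) so a cached entry that still points at the
      // old server (port 33011 above) is replaced by the new one (port 36863).
      static HRegionLocation refreshRsGroupRegionLocation(Connection conn) throws Exception {
        try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:rsgroup"))) {
          return locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
        }
      }
    }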
2023-07-21 11:17:12,516 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=0d251b6fcd6df4af958f1fccdfdc34e4, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:12,516 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938232516"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938232516"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938232516"}]},"ts":"1689938232516"} 2023-07-21 11:17:12,521 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; OpenRegionProcedure 0d251b6fcd6df4af958f1fccdfdc34e4, server=jenkins-hbase17.apache.org,36863,1689938225106}] 2023-07-21 11:17:12,676 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:12,676 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:17:12,708 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:57438, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:17:12,715 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:12,716 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0d251b6fcd6df4af958f1fccdfdc34e4, NAME => 'hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:12,716 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:17:12,716 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. service=MultiRowMutationService 2023-07-21 11:17:12,716 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-21 11:17:12,716 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:12,716 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:12,716 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:12,716 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:12,719 INFO [StoreOpener-0d251b6fcd6df4af958f1fccdfdc34e4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:12,722 DEBUG [StoreOpener-0d251b6fcd6df4af958f1fccdfdc34e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/m 2023-07-21 11:17:12,722 DEBUG [StoreOpener-0d251b6fcd6df4af958f1fccdfdc34e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/m 2023-07-21 11:17:12,723 INFO [StoreOpener-0d251b6fcd6df4af958f1fccdfdc34e4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0d251b6fcd6df4af958f1fccdfdc34e4 columnFamilyName m 2023-07-21 11:17:12,735 DEBUG [StoreOpener-0d251b6fcd6df4af958f1fccdfdc34e4-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/m/18080b713b7b468e9af86a18ffb475be 2023-07-21 11:17:12,736 INFO [StoreOpener-0d251b6fcd6df4af958f1fccdfdc34e4-1] regionserver.HStore(310): Store=0d251b6fcd6df4af958f1fccdfdc34e4/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:12,738 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:12,742 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:12,749 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:12,750 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 0d251b6fcd6df4af958f1fccdfdc34e4; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@4fab5a53, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:12,750 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 0d251b6fcd6df4af958f1fccdfdc34e4: 2023-07-21 11:17:12,756 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4., pid=14, masterSystemTime=1689938232675 2023-07-21 11:17:12,763 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:12,764 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:12,764 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=0d251b6fcd6df4af958f1fccdfdc34e4, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:12,765 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938232764"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938232764"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938232764"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938232764"}]},"ts":"1689938232764"} 2023-07-21 11:17:12,774 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-21 11:17:12,774 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; OpenRegionProcedure 0d251b6fcd6df4af958f1fccdfdc34e4, server=jenkins-hbase17.apache.org,36863,1689938225106 in 248 msec 2023-07-21 11:17:12,778 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=0d251b6fcd6df4af958f1fccdfdc34e4, REOPEN/MOVE in 893 msec 2023-07-21 11:17:12,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-21 11:17:12,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,33011,1689938225358, jenkins-hbase17.apache.org,35009,1689938231406] are moved back to default 2023-07-21 11:17:12,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:12,886 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:12,889 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33011] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 136.243.18.41:52818 deadline: 1689938292887, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=36863 startCode=1689938225106. As of locationSeqNum=9. 2023-07-21 11:17:12,981 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 11:17:12,981 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-21 11:17:12,982 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:17:12,982 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-21 11:17:12,982 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 11:17:12,982 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-21 11:17:12,996 DEBUG [hconnection-0xc2f4991-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:17:13,009 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:57446, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:17:13,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:13,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:13,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:13,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:13,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 
'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:13,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 11:17:13,082 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:17:13,085 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33011] ipc.CallRunner(144): callId: 43 service: ClientService methodName: ExecService size: 624 connection: 136.243.18.41:52808 deadline: 1689938293085, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=36863 startCode=1689938225106. As of locationSeqNum=9. 2023-07-21 11:17:13,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 15 2023-07-21 11:17:13,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 11:17:13,190 DEBUG [PEWorker-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:17:13,192 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:57452, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:17:13,200 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:13,201 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:13,201 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:13,202 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:13,210 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:17:13,222 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:13,222 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:13,224 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:13,224 DEBUG [HFileArchiver-3] 
backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972 empty. 2023-07-21 11:17:13,225 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:13,225 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6 empty. 2023-07-21 11:17:13,225 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661 2023-07-21 11:17:13,226 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348 empty. 2023-07-21 11:17:13,226 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:13,226 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:13,226 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:13,227 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661 empty. 2023-07-21 11:17:13,227 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1 empty. 
2023-07-21 11:17:13,227 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661 2023-07-21 11:17:13,228 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:13,228 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 11:17:13,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 11:17:13,290 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:13,292 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => ad30f5a32ab3318e29f6bb37c63129b6, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:13,292 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => b34b491f20b5b207a4739b04422e6972, NAME => 'Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:13,293 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => ebcba8f23ad518b980bd222b45d4d348, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:13,353 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:13,353 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:13,354 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing ad30f5a32ab3318e29f6bb37c63129b6, disabling compactions & flushes 2023-07-21 11:17:13,354 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:13,355 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. 2023-07-21 11:17:13,354 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing ebcba8f23ad518b980bd222b45d4d348, disabling compactions & flushes 2023-07-21 11:17:13,355 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. 2023-07-21 11:17:13,355 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing b34b491f20b5b207a4739b04422e6972, disabling compactions & flushes 2023-07-21 11:17:13,355 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. after waiting 0 ms 2023-07-21 11:17:13,355 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. 2023-07-21 11:17:13,355 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. 2023-07-21 11:17:13,355 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. 2023-07-21 11:17:13,355 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. 
2023-07-21 11:17:13,355 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. 2023-07-21 11:17:13,355 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for ad30f5a32ab3318e29f6bb37c63129b6: 2023-07-21 11:17:13,355 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. 2023-07-21 11:17:13,355 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. after waiting 0 ms 2023-07-21 11:17:13,355 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. 2023-07-21 11:17:13,355 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. 2023-07-21 11:17:13,355 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for ebcba8f23ad518b980bd222b45d4d348: 2023-07-21 11:17:13,356 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. after waiting 0 ms 2023-07-21 11:17:13,356 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. 2023-07-21 11:17:13,356 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 687be8526fbd580e2020048aab72f661, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:13,356 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. 
2023-07-21 11:17:13,356 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 447b76afd454e02f371ce70f46a3aec1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:13,356 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for b34b491f20b5b207a4739b04422e6972: 2023-07-21 11:17:13,395 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:13,396 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 447b76afd454e02f371ce70f46a3aec1, disabling compactions & flushes 2023-07-21 11:17:13,397 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. 2023-07-21 11:17:13,398 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. 2023-07-21 11:17:13,398 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. after waiting 0 ms 2023-07-21 11:17:13,398 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. 2023-07-21 11:17:13,398 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. 
2023-07-21 11:17:13,398 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 447b76afd454e02f371ce70f46a3aec1: 2023-07-21 11:17:13,402 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:13,402 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 687be8526fbd580e2020048aab72f661, disabling compactions & flushes 2023-07-21 11:17:13,402 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. 2023-07-21 11:17:13,402 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. 2023-07-21 11:17:13,403 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. after waiting 0 ms 2023-07-21 11:17:13,403 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. 2023-07-21 11:17:13,403 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. 
2023-07-21 11:17:13,403 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 687be8526fbd580e2020048aab72f661: 2023-07-21 11:17:13,407 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:17:13,409 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938233408"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938233408"}]},"ts":"1689938233408"} 2023-07-21 11:17:13,409 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938233408"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938233408"}]},"ts":"1689938233408"} 2023-07-21 11:17:13,409 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938233408"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938233408"}]},"ts":"1689938233408"} 2023-07-21 11:17:13,409 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938233408"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938233408"}]},"ts":"1689938233408"} 2023-07-21 11:17:13,409 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938233408"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938233408"}]},"ts":"1689938233408"} 2023-07-21 11:17:13,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 11:17:13,460 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-21 11:17:13,462 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:17:13,463 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938233462"}]},"ts":"1689938233462"} 2023-07-21 11:17:13,465 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-21 11:17:13,470 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:13,470 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:13,471 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:13,471 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:13,471 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b34b491f20b5b207a4739b04422e6972, ASSIGN}, {pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ebcba8f23ad518b980bd222b45d4d348, ASSIGN}, {pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ad30f5a32ab3318e29f6bb37c63129b6, ASSIGN}, {pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=447b76afd454e02f371ce70f46a3aec1, ASSIGN}, {pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687be8526fbd580e2020048aab72f661, ASSIGN}] 2023-07-21 11:17:13,474 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ebcba8f23ad518b980bd222b45d4d348, ASSIGN 2023-07-21 11:17:13,475 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687be8526fbd580e2020048aab72f661, ASSIGN 2023-07-21 11:17:13,475 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b34b491f20b5b207a4739b04422e6972, ASSIGN 2023-07-21 11:17:13,476 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=447b76afd454e02f371ce70f46a3aec1, ASSIGN 2023-07-21 11:17:13,477 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=15, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ad30f5a32ab3318e29f6bb37c63129b6, ASSIGN 2023-07-21 11:17:13,477 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ebcba8f23ad518b980bd222b45d4d348, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46255,1689938224878; forceNewPlan=false, retain=false 2023-07-21 11:17:13,477 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687be8526fbd580e2020048aab72f661, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36863,1689938225106; forceNewPlan=false, retain=false 2023-07-21 11:17:13,477 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b34b491f20b5b207a4739b04422e6972, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36863,1689938225106; forceNewPlan=false, retain=false 2023-07-21 11:17:13,477 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=447b76afd454e02f371ce70f46a3aec1, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46255,1689938224878; forceNewPlan=false, retain=false 2023-07-21 11:17:13,479 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ad30f5a32ab3318e29f6bb37c63129b6, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36863,1689938225106; forceNewPlan=false, retain=false 2023-07-21 11:17:13,628 INFO [jenkins-hbase17:40703] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-21 11:17:13,644 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=447b76afd454e02f371ce70f46a3aec1, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:13,645 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938233644"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938233644"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938233644"}]},"ts":"1689938233644"} 2023-07-21 11:17:13,645 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=ebcba8f23ad518b980bd222b45d4d348, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:13,646 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=ad30f5a32ab3318e29f6bb37c63129b6, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:13,646 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938233645"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938233645"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938233645"}]},"ts":"1689938233645"} 2023-07-21 11:17:13,646 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938233646"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938233646"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938233646"}]},"ts":"1689938233646"} 2023-07-21 11:17:13,646 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=b34b491f20b5b207a4739b04422e6972, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:13,646 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938233646"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938233646"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938233646"}]},"ts":"1689938233646"} 2023-07-21 11:17:13,645 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=687be8526fbd580e2020048aab72f661, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:13,649 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938233645"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938233645"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938233645"}]},"ts":"1689938233645"} 2023-07-21 11:17:13,662 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=19, state=RUNNABLE; OpenRegionProcedure 
447b76afd454e02f371ce70f46a3aec1, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:13,667 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=17, state=RUNNABLE; OpenRegionProcedure ebcba8f23ad518b980bd222b45d4d348, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:13,675 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=23, ppid=18, state=RUNNABLE; OpenRegionProcedure ad30f5a32ab3318e29f6bb37c63129b6, server=jenkins-hbase17.apache.org,36863,1689938225106}] 2023-07-21 11:17:13,691 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=16, state=RUNNABLE; OpenRegionProcedure b34b491f20b5b207a4739b04422e6972, server=jenkins-hbase17.apache.org,36863,1689938225106}] 2023-07-21 11:17:13,696 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=20, state=RUNNABLE; OpenRegionProcedure 687be8526fbd580e2020048aab72f661, server=jenkins-hbase17.apache.org,36863,1689938225106}] 2023-07-21 11:17:13,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 11:17:13,826 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. 2023-07-21 11:17:13,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ebcba8f23ad518b980bd222b45d4d348, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 11:17:13,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:13,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:13,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:13,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:13,829 INFO [StoreOpener-ebcba8f23ad518b980bd222b45d4d348-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:13,831 DEBUG [StoreOpener-ebcba8f23ad518b980bd222b45d4d348-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348/f 2023-07-21 11:17:13,831 DEBUG [StoreOpener-ebcba8f23ad518b980bd222b45d4d348-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348/f 2023-07-21 11:17:13,832 INFO [StoreOpener-ebcba8f23ad518b980bd222b45d4d348-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ebcba8f23ad518b980bd222b45d4d348 columnFamilyName f 2023-07-21 11:17:13,833 INFO [StoreOpener-ebcba8f23ad518b980bd222b45d4d348-1] regionserver.HStore(310): Store=ebcba8f23ad518b980bd222b45d4d348/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:13,834 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:13,835 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:13,841 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:13,842 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. 
2023-07-21 11:17:13,843 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b34b491f20b5b207a4739b04422e6972, NAME => 'Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 11:17:13,843 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:13,844 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:13,844 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:13,844 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:13,844 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:13,846 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened ebcba8f23ad518b980bd222b45d4d348; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11940818560, jitterRate=0.11207538843154907}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:13,846 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for ebcba8f23ad518b980bd222b45d4d348: 2023-07-21 11:17:13,846 INFO [StoreOpener-b34b491f20b5b207a4739b04422e6972-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:13,847 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348., pid=22, masterSystemTime=1689938233821 2023-07-21 11:17:13,848 DEBUG [StoreOpener-b34b491f20b5b207a4739b04422e6972-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972/f 2023-07-21 11:17:13,848 DEBUG [StoreOpener-b34b491f20b5b207a4739b04422e6972-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972/f 2023-07-21 11:17:13,849 INFO [StoreOpener-b34b491f20b5b207a4739b04422e6972-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b34b491f20b5b207a4739b04422e6972 columnFamilyName f 2023-07-21 11:17:13,850 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. 2023-07-21 11:17:13,850 INFO [StoreOpener-b34b491f20b5b207a4739b04422e6972-1] regionserver.HStore(310): Store=b34b491f20b5b207a4739b04422e6972/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:13,850 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. 2023-07-21 11:17:13,851 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. 2023-07-21 11:17:13,851 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 447b76afd454e02f371ce70f46a3aec1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 11:17:13,851 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=ebcba8f23ad518b980bd222b45d4d348, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:13,852 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938233851"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938233851"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938233851"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938233851"}]},"ts":"1689938233851"} 2023-07-21 11:17:13,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:13,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:13,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:13,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 
447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:13,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:13,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:13,855 INFO [StoreOpener-447b76afd454e02f371ce70f46a3aec1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:13,857 DEBUG [StoreOpener-447b76afd454e02f371ce70f46a3aec1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1/f 2023-07-21 11:17:13,858 DEBUG [StoreOpener-447b76afd454e02f371ce70f46a3aec1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1/f 2023-07-21 11:17:13,858 INFO [StoreOpener-447b76afd454e02f371ce70f46a3aec1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 447b76afd454e02f371ce70f46a3aec1 columnFamilyName f 2023-07-21 11:17:13,861 INFO [StoreOpener-447b76afd454e02f371ce70f46a3aec1-1] regionserver.HStore(310): Store=447b76afd454e02f371ce70f46a3aec1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:13,861 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=17 2023-07-21 11:17:13,862 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=17, state=SUCCESS; OpenRegionProcedure ebcba8f23ad518b980bd222b45d4d348, server=jenkins-hbase17.apache.org,46255,1689938224878 in 188 msec 2023-07-21 11:17:13,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:13,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq 
id for b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:13,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:13,864 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ebcba8f23ad518b980bd222b45d4d348, ASSIGN in 390 msec 2023-07-21 11:17:13,867 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:13,868 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened b34b491f20b5b207a4739b04422e6972; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10434411680, jitterRate=-0.028219684958457947}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:13,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for b34b491f20b5b207a4739b04422e6972: 2023-07-21 11:17:13,869 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972., pid=24, masterSystemTime=1689938233838 2023-07-21 11:17:13,871 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:13,871 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. 2023-07-21 11:17:13,871 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. 2023-07-21 11:17:13,872 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. 
2023-07-21 11:17:13,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ad30f5a32ab3318e29f6bb37c63129b6, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 11:17:13,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:13,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:13,872 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=b34b491f20b5b207a4739b04422e6972, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:13,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:13,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:13,873 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938233872"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938233872"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938233872"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938233872"}]},"ts":"1689938233872"} 2023-07-21 11:17:13,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:13,875 INFO [StoreOpener-ad30f5a32ab3318e29f6bb37c63129b6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:13,875 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 447b76afd454e02f371ce70f46a3aec1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10621471840, jitterRate=-0.010798349976539612}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:13,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 447b76afd454e02f371ce70f46a3aec1: 2023-07-21 11:17:13,878 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1., pid=21, 
masterSystemTime=1689938233821 2023-07-21 11:17:13,879 DEBUG [StoreOpener-ad30f5a32ab3318e29f6bb37c63129b6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6/f 2023-07-21 11:17:13,880 DEBUG [StoreOpener-ad30f5a32ab3318e29f6bb37c63129b6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6/f 2023-07-21 11:17:13,880 INFO [StoreOpener-ad30f5a32ab3318e29f6bb37c63129b6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ad30f5a32ab3318e29f6bb37c63129b6 columnFamilyName f 2023-07-21 11:17:13,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. 2023-07-21 11:17:13,881 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. 
2023-07-21 11:17:13,881 INFO [StoreOpener-ad30f5a32ab3318e29f6bb37c63129b6-1] regionserver.HStore(310): Store=ad30f5a32ab3318e29f6bb37c63129b6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:13,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:13,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:13,892 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=16 2023-07-21 11:17:13,892 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=447b76afd454e02f371ce70f46a3aec1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:13,892 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=16, state=SUCCESS; OpenRegionProcedure b34b491f20b5b207a4739b04422e6972, server=jenkins-hbase17.apache.org,36863,1689938225106 in 184 msec 2023-07-21 11:17:13,893 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938233892"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938233892"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938233892"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938233892"}]},"ts":"1689938233892"} 2023-07-21 11:17:13,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:13,896 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b34b491f20b5b207a4739b04422e6972, ASSIGN in 421 msec 2023-07-21 11:17:13,900 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=19 2023-07-21 11:17:13,900 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=19, state=SUCCESS; OpenRegionProcedure 447b76afd454e02f371ce70f46a3aec1, server=jenkins-hbase17.apache.org,46255,1689938224878 in 233 msec 2023-07-21 11:17:13,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:13,901 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened ad30f5a32ab3318e29f6bb37c63129b6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10365979520, 
jitterRate=-0.03459292650222778}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:13,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for ad30f5a32ab3318e29f6bb37c63129b6: 2023-07-21 11:17:13,902 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6., pid=23, masterSystemTime=1689938233838 2023-07-21 11:17:13,903 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=447b76afd454e02f371ce70f46a3aec1, ASSIGN in 429 msec 2023-07-21 11:17:13,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. 2023-07-21 11:17:13,905 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. 2023-07-21 11:17:13,906 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. 2023-07-21 11:17:13,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 687be8526fbd580e2020048aab72f661, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 11:17:13,906 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=ad30f5a32ab3318e29f6bb37c63129b6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:13,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 687be8526fbd580e2020048aab72f661 2023-07-21 11:17:13,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:13,906 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938233906"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938233906"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938233906"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938233906"}]},"ts":"1689938233906"} 2023-07-21 11:17:13,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 687be8526fbd580e2020048aab72f661 2023-07-21 11:17:13,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 687be8526fbd580e2020048aab72f661 2023-07-21 11:17:13,909 INFO [StoreOpener-687be8526fbd580e2020048aab72f661-1] regionserver.HStore(381): 
Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 687be8526fbd580e2020048aab72f661 2023-07-21 11:17:13,914 DEBUG [StoreOpener-687be8526fbd580e2020048aab72f661-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661/f 2023-07-21 11:17:13,914 DEBUG [StoreOpener-687be8526fbd580e2020048aab72f661-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661/f 2023-07-21 11:17:13,914 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=23, resume processing ppid=18 2023-07-21 11:17:13,914 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=18, state=SUCCESS; OpenRegionProcedure ad30f5a32ab3318e29f6bb37c63129b6, server=jenkins-hbase17.apache.org,36863,1689938225106 in 235 msec 2023-07-21 11:17:13,915 INFO [StoreOpener-687be8526fbd580e2020048aab72f661-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 687be8526fbd580e2020048aab72f661 columnFamilyName f 2023-07-21 11:17:13,916 INFO [StoreOpener-687be8526fbd580e2020048aab72f661-1] regionserver.HStore(310): Store=687be8526fbd580e2020048aab72f661/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:13,917 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ad30f5a32ab3318e29f6bb37c63129b6, ASSIGN in 443 msec 2023-07-21 11:17:13,920 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661 2023-07-21 11:17:13,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661 2023-07-21 11:17:13,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 687be8526fbd580e2020048aab72f661 2023-07-21 11:17:13,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:13,928 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 687be8526fbd580e2020048aab72f661; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9537043360, jitterRate=-0.1117936223745346}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:13,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 687be8526fbd580e2020048aab72f661: 2023-07-21 11:17:13,929 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661., pid=25, masterSystemTime=1689938233838 2023-07-21 11:17:13,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. 2023-07-21 11:17:13,933 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. 2023-07-21 11:17:13,933 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=687be8526fbd580e2020048aab72f661, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:13,934 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938233933"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938233933"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938233933"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938233933"}]},"ts":"1689938233933"} 2023-07-21 11:17:13,942 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=20 2023-07-21 11:17:13,942 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=20, state=SUCCESS; OpenRegionProcedure 687be8526fbd580e2020048aab72f661, server=jenkins-hbase17.apache.org,36863,1689938225106 in 241 msec 2023-07-21 11:17:13,947 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=15 2023-07-21 11:17:13,947 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687be8526fbd580e2020048aab72f661, ASSIGN in 471 msec 2023-07-21 11:17:13,949 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:17:13,949 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938233949"}]},"ts":"1689938233949"} 
2023-07-21 11:17:13,952 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-21 11:17:13,956 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:17:13,960 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 899 msec 2023-07-21 11:17:14,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 11:17:14,243 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 15 completed 2023-07-21 11:17:14,243 DEBUG [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-21 11:17:14,244 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:14,250 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-21 11:17:14,251 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:14,251 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 
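The records up to this point trace the whole CreateTableProcedure (pid=15): Group_testTableMoveTruncateAndDrop is created with five pre-split regions, each region is opened and recorded in hbase:meta, the table is marked ENABLED, and the listener thread then waits until every region is reported assigned. For orientation, the client-side calls that drive such a sequence would look roughly like the sketch below; it assumes only the stock Admin and HBaseTestingUtility APIs, the split keys are placeholders, and none of it is copied from the test source.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class CreateAndWaitSketch {
      // Creates a pre-split table with one family "f" and blocks until every region is assigned,
      // mirroring the CreateTableProcedure and "Waiting until all regions ... get assigned" records above.
      static void createTable(HBaseTestingUtility util) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        byte[][] splitKeys = {          // the test's real split points are binary; these are placeholders
            Bytes.toBytes("aaaaa"), Bytes.toBytes("jjjjj"),
            Bytes.toBytes("rrrrr"), Bytes.toBytes("zzzzz")
        };
        try (Admin admin = util.getConnection().getAdmin()) {
          admin.createTable(
              TableDescriptorBuilder.newBuilder(table)
                  .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                  .build(),
              splitKeys);               // five regions, one per key range, as in the log
        }
        util.waitUntilAllRegionsAssigned(table);   // same wait the Listener thread performs above
      }
    }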
2023-07-21 11:17:14,252 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:14,257 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:17:14,264 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:52826, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:17:14,268 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:17:14,272 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:38346, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:17:14,273 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:17:14,277 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:57460, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:17:14,279 DEBUG [Listener at localhost.localdomain/38409] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:17:14,284 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:53398, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:17:14,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-21 11:17:14,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 11:17:14,298 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:14,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:14,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:14,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:14,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:14,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:14,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:14,322 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(345): Moving region b34b491f20b5b207a4739b04422e6972 to RSGroup Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:14,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:14,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:14,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:14,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:14,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:14,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b34b491f20b5b207a4739b04422e6972, REOPEN/MOVE 2023-07-21 11:17:14,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(345): Moving region ebcba8f23ad518b980bd222b45d4d348 to RSGroup Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:14,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:14,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:14,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:14,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:14,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:14,326 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b34b491f20b5b207a4739b04422e6972, REOPEN/MOVE 2023-07-21 11:17:14,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ebcba8f23ad518b980bd222b45d4d348, REOPEN/MOVE 2023-07-21 11:17:14,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(345): Moving region ad30f5a32ab3318e29f6bb37c63129b6 to RSGroup Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:14,331 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ebcba8f23ad518b980bd222b45d4d348, REOPEN/MOVE 2023-07-21 11:17:14,331 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:14,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:14,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:14,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:14,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:14,333 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=b34b491f20b5b207a4739b04422e6972, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:14,333 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938234332"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938234332"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938234332"}]},"ts":"1689938234332"} 2023-07-21 11:17:14,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ad30f5a32ab3318e29f6bb37c63129b6, REOPEN/MOVE 2023-07-21 11:17:14,335 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=ebcba8f23ad518b980bd222b45d4d348, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:14,335 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(345): Moving region 447b76afd454e02f371ce70f46a3aec1 to RSGroup Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:14,336 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ad30f5a32ab3318e29f6bb37c63129b6, REOPEN/MOVE 2023-07-21 11:17:14,336 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938234335"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938234335"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938234335"}]},"ts":"1689938234335"} 2023-07-21 11:17:14,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:14,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:14,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:14,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] 
balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:14,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:14,338 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=26, state=RUNNABLE; CloseRegionProcedure b34b491f20b5b207a4739b04422e6972, server=jenkins-hbase17.apache.org,36863,1689938225106}] 2023-07-21 11:17:14,339 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=ad30f5a32ab3318e29f6bb37c63129b6, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:14,339 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938234339"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938234339"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938234339"}]},"ts":"1689938234339"} 2023-07-21 11:17:14,341 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=27, state=RUNNABLE; CloseRegionProcedure ebcba8f23ad518b980bd222b45d4d348, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:14,342 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=447b76afd454e02f371ce70f46a3aec1, REOPEN/MOVE 2023-07-21 11:17:14,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(345): Moving region 687be8526fbd580e2020048aab72f661 to RSGroup Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:14,343 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=447b76afd454e02f371ce70f46a3aec1, REOPEN/MOVE 2023-07-21 11:17:14,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:14,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:14,344 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=28, state=RUNNABLE; CloseRegionProcedure ad30f5a32ab3318e29f6bb37c63129b6, server=jenkins-hbase17.apache.org,36863,1689938225106}] 2023-07-21 11:17:14,345 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=447b76afd454e02f371ce70f46a3aec1, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:14,345 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938234345"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938234345"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938234345"}]},"ts":"1689938234345"} 2023-07-21 11:17:14,345 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:14,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:14,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:14,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687be8526fbd580e2020048aab72f661, REOPEN/MOVE 2023-07-21 11:17:14,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_287573559, current retry=0 2023-07-21 11:17:14,350 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687be8526fbd580e2020048aab72f661, REOPEN/MOVE 2023-07-21 11:17:14,350 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=30, state=RUNNABLE; CloseRegionProcedure 447b76afd454e02f371ce70f46a3aec1, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:14,351 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=687be8526fbd580e2020048aab72f661, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:14,352 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938234351"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938234351"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938234351"}]},"ts":"1689938234351"} 2023-07-21 11:17:14,356 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=33, state=RUNNABLE; CloseRegionProcedure 687be8526fbd580e2020048aab72f661, server=jenkins-hbase17.apache.org,36863,1689938225106}] 2023-07-21 11:17:14,445 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 11:17:14,446 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-21 11:17:14,447 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-21 11:17:14,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:14,497 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing ad30f5a32ab3318e29f6bb37c63129b6, disabling compactions & flushes 2023-07-21 11:17:14,497 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. 
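From 11:17:14,297 onward the RPC handler thread is servicing an rsgroup move: the client first asks which group the table belongs to, then requests that it be moved to Group_testTableMoveTruncateAndDrop_287573559; the master rewrites the rsgroup znodes and schedules a REOPEN/MOVE TransitRegionStateProcedure (pids 26-35) for each of the five regions, which the CloseRegionProcedure records that follow carry out. A minimal sketch of the client side of such a move, assuming the RSGroupAdminClient API from the hbase-rsgroup module (the group and server arguments here are illustrative, not the test's own values):

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class MoveTableToGroupSketch {
      // Moves a table into a dedicated RegionServer group; each of its regions is closed on its
      // current server and reopened on a server of the target group, as the procedures above show.
      static void moveTable(Connection conn, TableName table, String group, Address server) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.addRSGroup(group);                                   // create the target group
        rsGroupAdmin.moveServers(Collections.singleton(server), group);   // give it at least one region server
        RSGroupInfo current = rsGroupAdmin.getRSGroupInfoOfTable(table);  // the GetRSGroupInfoOfTable RPC above
        System.out.println("table currently in group " + current.getName());
        rsGroupAdmin.moveTables(Collections.singleton(table), group);     // triggers the REOPEN/MOVE procedures
      }
    }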
2023-07-21 11:17:14,497 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. 2023-07-21 11:17:14,497 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. after waiting 0 ms 2023-07-21 11:17:14,497 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. 2023-07-21 11:17:14,502 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:14,513 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing ebcba8f23ad518b980bd222b45d4d348, disabling compactions & flushes 2023-07-21 11:17:14,513 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. 2023-07-21 11:17:14,513 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. 2023-07-21 11:17:14,513 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. after waiting 0 ms 2023-07-21 11:17:14,513 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. 2023-07-21 11:17:14,521 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:14,523 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. 
2023-07-21 11:17:14,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for ad30f5a32ab3318e29f6bb37c63129b6: 2023-07-21 11:17:14,523 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding ad30f5a32ab3318e29f6bb37c63129b6 move to jenkins-hbase17.apache.org,35009,1689938231406 record at close sequenceid=2 2023-07-21 11:17:14,529 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:14,529 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 687be8526fbd580e2020048aab72f661 2023-07-21 11:17:14,533 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=ad30f5a32ab3318e29f6bb37c63129b6, regionState=CLOSED 2023-07-21 11:17:14,533 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938234533"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938234533"}]},"ts":"1689938234533"} 2023-07-21 11:17:14,541 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=28 2023-07-21 11:17:14,541 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=28, state=SUCCESS; CloseRegionProcedure ad30f5a32ab3318e29f6bb37c63129b6, server=jenkins-hbase17.apache.org,36863,1689938225106 in 192 msec 2023-07-21 11:17:14,543 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ad30f5a32ab3318e29f6bb37c63129b6, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,35009,1689938231406; forceNewPlan=false, retain=false 2023-07-21 11:17:14,558 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 687be8526fbd580e2020048aab72f661, disabling compactions & flushes 2023-07-21 11:17:14,559 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. 2023-07-21 11:17:14,559 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:14,559 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. 2023-07-21 11:17:14,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. after waiting 0 ms 2023-07-21 11:17:14,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. 
2023-07-21 11:17:14,561 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. 2023-07-21 11:17:14,561 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for ebcba8f23ad518b980bd222b45d4d348: 2023-07-21 11:17:14,561 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding ebcba8f23ad518b980bd222b45d4d348 move to jenkins-hbase17.apache.org,35009,1689938231406 record at close sequenceid=2 2023-07-21 11:17:14,568 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:14,568 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:14,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 447b76afd454e02f371ce70f46a3aec1, disabling compactions & flushes 2023-07-21 11:17:14,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. 2023-07-21 11:17:14,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. 2023-07-21 11:17:14,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. after waiting 0 ms 2023-07-21 11:17:14,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. 
2023-07-21 11:17:14,586 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=ebcba8f23ad518b980bd222b45d4d348, regionState=CLOSED 2023-07-21 11:17:14,586 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938234586"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938234586"}]},"ts":"1689938234586"} 2023-07-21 11:17:14,602 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=27 2023-07-21 11:17:14,602 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; CloseRegionProcedure ebcba8f23ad518b980bd222b45d4d348, server=jenkins-hbase17.apache.org,46255,1689938224878 in 256 msec 2023-07-21 11:17:14,604 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ebcba8f23ad518b980bd222b45d4d348, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,35009,1689938231406; forceNewPlan=false, retain=false 2023-07-21 11:17:14,610 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:14,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. 2023-07-21 11:17:14,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 447b76afd454e02f371ce70f46a3aec1: 2023-07-21 11:17:14,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 447b76afd454e02f371ce70f46a3aec1 move to jenkins-hbase17.apache.org,35009,1689938231406 record at close sequenceid=2 2023-07-21 11:17:14,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:14,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:14,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. 
2023-07-21 11:17:14,620 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 687be8526fbd580e2020048aab72f661: 2023-07-21 11:17:14,620 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=447b76afd454e02f371ce70f46a3aec1, regionState=CLOSED 2023-07-21 11:17:14,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 687be8526fbd580e2020048aab72f661 move to jenkins-hbase17.apache.org,35009,1689938231406 record at close sequenceid=2 2023-07-21 11:17:14,620 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938234620"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938234620"}]},"ts":"1689938234620"} 2023-07-21 11:17:14,625 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 687be8526fbd580e2020048aab72f661 2023-07-21 11:17:14,625 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:14,626 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=687be8526fbd580e2020048aab72f661, regionState=CLOSED 2023-07-21 11:17:14,626 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938234626"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938234626"}]},"ts":"1689938234626"} 2023-07-21 11:17:14,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing b34b491f20b5b207a4739b04422e6972, disabling compactions & flushes 2023-07-21 11:17:14,629 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. 2023-07-21 11:17:14,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. 2023-07-21 11:17:14,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. after waiting 0 ms 2023-07-21 11:17:14,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. 
2023-07-21 11:17:14,639 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=30 2023-07-21 11:17:14,639 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=30, state=SUCCESS; CloseRegionProcedure 447b76afd454e02f371ce70f46a3aec1, server=jenkins-hbase17.apache.org,46255,1689938224878 in 273 msec 2023-07-21 11:17:14,643 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=447b76afd454e02f371ce70f46a3aec1, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,35009,1689938231406; forceNewPlan=false, retain=false 2023-07-21 11:17:14,646 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=33 2023-07-21 11:17:14,647 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=33, state=SUCCESS; CloseRegionProcedure 687be8526fbd580e2020048aab72f661, server=jenkins-hbase17.apache.org,36863,1689938225106 in 279 msec 2023-07-21 11:17:14,653 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687be8526fbd580e2020048aab72f661, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,35009,1689938231406; forceNewPlan=false, retain=false 2023-07-21 11:17:14,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:14,670 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. 
2023-07-21 11:17:14,671 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for b34b491f20b5b207a4739b04422e6972: 2023-07-21 11:17:14,671 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding b34b491f20b5b207a4739b04422e6972 move to jenkins-hbase17.apache.org,35009,1689938231406 record at close sequenceid=2 2023-07-21 11:17:14,676 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:14,679 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=b34b491f20b5b207a4739b04422e6972, regionState=CLOSED 2023-07-21 11:17:14,679 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938234679"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938234679"}]},"ts":"1689938234679"} 2023-07-21 11:17:14,686 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=26 2023-07-21 11:17:14,686 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=26, state=SUCCESS; CloseRegionProcedure b34b491f20b5b207a4739b04422e6972, server=jenkins-hbase17.apache.org,36863,1689938225106 in 343 msec 2023-07-21 11:17:14,687 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b34b491f20b5b207a4739b04422e6972, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,35009,1689938231406; forceNewPlan=false, retain=false 2023-07-21 11:17:14,693 INFO [jenkins-hbase17:40703] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
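With all five regions closed, the balancer has picked jenkins-hbase17.apache.org,35009 (the only server in the target group) for every region, and the OpenRegionProcedures that follow re-open them there. Once those finish, a caller could confirm the move by checking where the table's regions now live; a rough check using only the standard RegionLocator API is sketched below (the group-membership set is supplied by the caller and is illustrative):

    import java.util.Set;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.net.Address;

    public final class VerifyMoveSketch {
      // Returns true if every region of the table is hosted by one of the given group members.
      static boolean allRegionsOnGroupServers(Connection conn, TableName table, Set<Address> groupServers)
          throws Exception {
        try (RegionLocator locator = conn.getRegionLocator(table)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            Address host = Address.fromParts(loc.getHostname(), loc.getPort());
            if (!groupServers.contains(host)) {
              return false;   // region still sits on a server outside the target group
            }
          }
        }
        return true;
      }
    }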
2023-07-21 11:17:14,694 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=ad30f5a32ab3318e29f6bb37c63129b6, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:14,694 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=687be8526fbd580e2020048aab72f661, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:14,694 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=447b76afd454e02f371ce70f46a3aec1, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:14,694 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938234694"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938234694"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938234694"}]},"ts":"1689938234694"} 2023-07-21 11:17:14,694 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938234694"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938234694"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938234694"}]},"ts":"1689938234694"} 2023-07-21 11:17:14,694 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938234694"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938234694"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938234694"}]},"ts":"1689938234694"} 2023-07-21 11:17:14,694 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=ebcba8f23ad518b980bd222b45d4d348, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:14,694 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=b34b491f20b5b207a4739b04422e6972, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:14,695 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938234694"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938234694"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938234694"}]},"ts":"1689938234694"} 2023-07-21 11:17:14,695 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938234694"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938234694"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938234694"}]},"ts":"1689938234694"} 2023-07-21 11:17:14,697 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=28, state=RUNNABLE; OpenRegionProcedure 
ad30f5a32ab3318e29f6bb37c63129b6, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:14,698 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=30, state=RUNNABLE; OpenRegionProcedure 447b76afd454e02f371ce70f46a3aec1, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:14,699 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=33, state=RUNNABLE; OpenRegionProcedure 687be8526fbd580e2020048aab72f661, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:14,701 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=27, state=RUNNABLE; OpenRegionProcedure ebcba8f23ad518b980bd222b45d4d348, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:14,703 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=26, state=RUNNABLE; OpenRegionProcedure b34b491f20b5b207a4739b04422e6972, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:14,849 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:14,850 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:17:14,851 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:38348, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:17:14,857 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. 
2023-07-21 11:17:14,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ad30f5a32ab3318e29f6bb37c63129b6, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 11:17:14,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:14,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:14,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:14,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:14,860 INFO [StoreOpener-ad30f5a32ab3318e29f6bb37c63129b6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:14,863 DEBUG [StoreOpener-ad30f5a32ab3318e29f6bb37c63129b6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6/f 2023-07-21 11:17:14,863 DEBUG [StoreOpener-ad30f5a32ab3318e29f6bb37c63129b6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6/f 2023-07-21 11:17:14,865 INFO [StoreOpener-ad30f5a32ab3318e29f6bb37c63129b6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ad30f5a32ab3318e29f6bb37c63129b6 columnFamilyName f 2023-07-21 11:17:14,866 INFO [StoreOpener-ad30f5a32ab3318e29f6bb37c63129b6-1] regionserver.HStore(310): Store=ad30f5a32ab3318e29f6bb37c63129b6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:14,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:14,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:14,883 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:14,885 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened ad30f5a32ab3318e29f6bb37c63129b6; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9586980480, jitterRate=-0.1071428656578064}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:14,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for ad30f5a32ab3318e29f6bb37c63129b6: 2023-07-21 11:17:14,888 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6., pid=36, masterSystemTime=1689938234849 2023-07-21 11:17:14,893 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. 2023-07-21 11:17:14,894 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. 2023-07-21 11:17:14,894 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. 
2023-07-21 11:17:14,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ebcba8f23ad518b980bd222b45d4d348, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 11:17:14,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:14,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:14,895 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=ad30f5a32ab3318e29f6bb37c63129b6, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:14,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:14,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:14,895 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938234895"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938234895"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938234895"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938234895"}]},"ts":"1689938234895"} 2023-07-21 11:17:14,898 INFO [StoreOpener-ebcba8f23ad518b980bd222b45d4d348-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:14,900 DEBUG [StoreOpener-ebcba8f23ad518b980bd222b45d4d348-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348/f 2023-07-21 11:17:14,900 DEBUG [StoreOpener-ebcba8f23ad518b980bd222b45d4d348-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348/f 2023-07-21 11:17:14,901 INFO [StoreOpener-ebcba8f23ad518b980bd222b45d4d348-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ebcba8f23ad518b980bd222b45d4d348 columnFamilyName f 2023-07-21 11:17:14,903 INFO [StoreOpener-ebcba8f23ad518b980bd222b45d4d348-1] regionserver.HStore(310): Store=ebcba8f23ad518b980bd222b45d4d348/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:14,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:14,906 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=28 2023-07-21 11:17:14,906 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=28, state=SUCCESS; OpenRegionProcedure ad30f5a32ab3318e29f6bb37c63129b6, server=jenkins-hbase17.apache.org,35009,1689938231406 in 201 msec 2023-07-21 11:17:14,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:14,915 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=28, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ad30f5a32ab3318e29f6bb37c63129b6, REOPEN/MOVE in 575 msec 2023-07-21 11:17:14,920 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:14,923 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened ebcba8f23ad518b980bd222b45d4d348; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11295959040, jitterRate=0.052018165588378906}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:14,923 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for ebcba8f23ad518b980bd222b45d4d348: 2023-07-21 11:17:14,924 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348., pid=39, masterSystemTime=1689938234849 2023-07-21 11:17:14,929 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=ebcba8f23ad518b980bd222b45d4d348, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:14,930 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938234929"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938234929"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938234929"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938234929"}]},"ts":"1689938234929"} 2023-07-21 11:17:14,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. 2023-07-21 11:17:14,930 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. 2023-07-21 11:17:14,930 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. 2023-07-21 11:17:14,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b34b491f20b5b207a4739b04422e6972, NAME => 'Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 11:17:14,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:14,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:14,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:14,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:14,935 INFO [StoreOpener-b34b491f20b5b207a4739b04422e6972-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:14,936 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=27 2023-07-21 11:17:14,936 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=27, state=SUCCESS; OpenRegionProcedure ebcba8f23ad518b980bd222b45d4d348, server=jenkins-hbase17.apache.org,35009,1689938231406 in 232 msec 2023-07-21 11:17:14,938 DEBUG [StoreOpener-b34b491f20b5b207a4739b04422e6972-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972/f 2023-07-21 11:17:14,939 DEBUG [StoreOpener-b34b491f20b5b207a4739b04422e6972-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972/f 2023-07-21 11:17:14,939 INFO [StoreOpener-b34b491f20b5b207a4739b04422e6972-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b34b491f20b5b207a4739b04422e6972 columnFamilyName f 2023-07-21 11:17:14,940 INFO [StoreOpener-b34b491f20b5b207a4739b04422e6972-1] regionserver.HStore(310): Store=b34b491f20b5b207a4739b04422e6972/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:14,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:14,942 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ebcba8f23ad518b980bd222b45d4d348, REOPEN/MOVE in 610 msec 2023-07-21 11:17:14,943 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:14,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:14,947 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened b34b491f20b5b207a4739b04422e6972; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10136080160, jitterRate=-0.05600397288799286}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:14,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for b34b491f20b5b207a4739b04422e6972: 2023-07-21 11:17:14,948 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972., pid=40, masterSystemTime=1689938234849 2023-07-21 11:17:14,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. 
2023-07-21 11:17:14,950 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. 2023-07-21 11:17:14,950 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. 2023-07-21 11:17:14,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 447b76afd454e02f371ce70f46a3aec1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 11:17:14,950 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=b34b491f20b5b207a4739b04422e6972, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:14,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:14,951 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938234950"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938234950"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938234950"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938234950"}]},"ts":"1689938234950"} 2023-07-21 11:17:14,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:14,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:14,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:14,952 INFO [StoreOpener-447b76afd454e02f371ce70f46a3aec1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:14,955 DEBUG [StoreOpener-447b76afd454e02f371ce70f46a3aec1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1/f 2023-07-21 11:17:14,955 DEBUG [StoreOpener-447b76afd454e02f371ce70f46a3aec1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1/f 2023-07-21 11:17:14,956 INFO [StoreOpener-447b76afd454e02f371ce70f46a3aec1-1] compactions.CompactionConfiguration(173): size 
[minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 447b76afd454e02f371ce70f46a3aec1 columnFamilyName f 2023-07-21 11:17:14,956 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=26 2023-07-21 11:17:14,957 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=26, state=SUCCESS; OpenRegionProcedure b34b491f20b5b207a4739b04422e6972, server=jenkins-hbase17.apache.org,35009,1689938231406 in 249 msec 2023-07-21 11:17:14,957 INFO [StoreOpener-447b76afd454e02f371ce70f46a3aec1-1] regionserver.HStore(310): Store=447b76afd454e02f371ce70f46a3aec1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:14,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:14,959 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b34b491f20b5b207a4739b04422e6972, REOPEN/MOVE in 634 msec 2023-07-21 11:17:14,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:14,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:14,963 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 447b76afd454e02f371ce70f46a3aec1; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11446716640, jitterRate=0.06605856120586395}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:14,963 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 447b76afd454e02f371ce70f46a3aec1: 2023-07-21 11:17:14,964 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1., pid=37, masterSystemTime=1689938234849 2023-07-21 11:17:14,966 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. 
2023-07-21 11:17:14,966 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. 2023-07-21 11:17:14,966 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. 2023-07-21 11:17:14,966 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 687be8526fbd580e2020048aab72f661, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 11:17:14,967 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=447b76afd454e02f371ce70f46a3aec1, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:14,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 687be8526fbd580e2020048aab72f661 2023-07-21 11:17:14,967 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938234967"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938234967"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938234967"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938234967"}]},"ts":"1689938234967"} 2023-07-21 11:17:14,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:14,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 687be8526fbd580e2020048aab72f661 2023-07-21 11:17:14,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 687be8526fbd580e2020048aab72f661 2023-07-21 11:17:14,969 INFO [StoreOpener-687be8526fbd580e2020048aab72f661-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 687be8526fbd580e2020048aab72f661 2023-07-21 11:17:14,972 DEBUG [StoreOpener-687be8526fbd580e2020048aab72f661-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661/f 2023-07-21 11:17:14,972 DEBUG [StoreOpener-687be8526fbd580e2020048aab72f661-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661/f 2023-07-21 11:17:14,973 INFO [StoreOpener-687be8526fbd580e2020048aab72f661-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 
MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 687be8526fbd580e2020048aab72f661 columnFamilyName f 2023-07-21 11:17:14,974 INFO [StoreOpener-687be8526fbd580e2020048aab72f661-1] regionserver.HStore(310): Store=687be8526fbd580e2020048aab72f661/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:14,974 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=30 2023-07-21 11:17:14,974 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=30, state=SUCCESS; OpenRegionProcedure 447b76afd454e02f371ce70f46a3aec1, server=jenkins-hbase17.apache.org,35009,1689938231406 in 272 msec 2023-07-21 11:17:14,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661 2023-07-21 11:17:14,977 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=447b76afd454e02f371ce70f46a3aec1, REOPEN/MOVE in 636 msec 2023-07-21 11:17:14,978 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661 2023-07-21 11:17:14,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 687be8526fbd580e2020048aab72f661 2023-07-21 11:17:14,983 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 687be8526fbd580e2020048aab72f661; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9746637920, jitterRate=-0.09227360785007477}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:14,983 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 687be8526fbd580e2020048aab72f661: 2023-07-21 11:17:14,984 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661., pid=38, masterSystemTime=1689938234849 2023-07-21 11:17:14,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. 
2023-07-21 11:17:14,986 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. 2023-07-21 11:17:14,986 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=687be8526fbd580e2020048aab72f661, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:14,987 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938234986"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938234986"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938234986"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938234986"}]},"ts":"1689938234986"} 2023-07-21 11:17:14,991 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=33 2023-07-21 11:17:14,991 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=33, state=SUCCESS; OpenRegionProcedure 687be8526fbd580e2020048aab72f661, server=jenkins-hbase17.apache.org,35009,1689938231406 in 290 msec 2023-07-21 11:17:14,993 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687be8526fbd580e2020048aab72f661, REOPEN/MOVE in 645 msec 2023-07-21 11:17:15,195 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 11:17:15,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure.ProcedureSyncWait(216): waitFor pid=26 2023-07-21 11:17:15,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_287573559. 
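
[Illustrative sketch, not part of the captured log.] The entries above record the master finishing the REOPEN/MOVE procedures and RSGroupAdminServer reporting that all regions of Group_testTableMoveTruncateAndDrop were moved to the target group, in response to an RSGroupAdminService.MoveTables request. A minimal client-side sketch of the calls that produce such requests follows, assuming the RSGroupAdminClient API from the hbase-rsgroup module; the class and method names reflect my understanding of that module rather than the test's actual code, and the group name used here is hypothetical.

// Minimal sketch, assuming the hbase-rsgroup client API; not a transcript of the test code.
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTableToGroupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");

      // Move the table into an existing rsgroup; the master then reopens every region
      // of the table on servers in that group (the REOPEN/MOVE
      // TransitRegionStateProcedures recorded in the log above).
      rsGroupAdmin.moveTables(Collections.singleton(table), "my_group"); // group name is hypothetical

      // Check which group the table now belongs to, mirroring the
      // GetRSGroupInfoOfTable request seen in the log.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
      System.out.println("Table is now in rsgroup: " + info.getName());
    }
  }
}
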
2023-07-21 11:17:15,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:15,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:15,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:15,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-21 11:17:15,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 11:17:15,360 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:15,368 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-21 11:17:15,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testTableMoveTruncateAndDrop 2023-07-21 11:17:15,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=41, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 11:17:15,391 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938235391"}]},"ts":"1689938235391"} 2023-07-21 11:17:15,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-21 11:17:15,395 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-21 11:17:15,397 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-21 11:17:15,402 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b34b491f20b5b207a4739b04422e6972, UNASSIGN}, {pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ebcba8f23ad518b980bd222b45d4d348, UNASSIGN}, {pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ad30f5a32ab3318e29f6bb37c63129b6, UNASSIGN}, {pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=447b76afd454e02f371ce70f46a3aec1, UNASSIGN}, {pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=687be8526fbd580e2020048aab72f661, UNASSIGN}] 2023-07-21 11:17:15,405 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=447b76afd454e02f371ce70f46a3aec1, UNASSIGN 2023-07-21 11:17:15,405 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ad30f5a32ab3318e29f6bb37c63129b6, UNASSIGN 2023-07-21 11:17:15,406 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ebcba8f23ad518b980bd222b45d4d348, UNASSIGN 2023-07-21 11:17:15,408 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687be8526fbd580e2020048aab72f661, UNASSIGN 2023-07-21 11:17:15,408 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=447b76afd454e02f371ce70f46a3aec1, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:15,408 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b34b491f20b5b207a4739b04422e6972, UNASSIGN 2023-07-21 11:17:15,408 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938235408"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938235408"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938235408"}]},"ts":"1689938235408"} 2023-07-21 11:17:15,409 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=ad30f5a32ab3318e29f6bb37c63129b6, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:15,409 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938235409"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938235409"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938235409"}]},"ts":"1689938235409"} 2023-07-21 11:17:15,410 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=ebcba8f23ad518b980bd222b45d4d348, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:15,410 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938235410"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938235410"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938235410"}]},"ts":"1689938235410"} 2023-07-21 
11:17:15,413 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=45, state=RUNNABLE; CloseRegionProcedure 447b76afd454e02f371ce70f46a3aec1, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:15,413 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=b34b491f20b5b207a4739b04422e6972, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:15,414 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=687be8526fbd580e2020048aab72f661, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:15,414 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938235413"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938235413"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938235413"}]},"ts":"1689938235413"} 2023-07-21 11:17:15,415 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938235413"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938235413"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938235413"}]},"ts":"1689938235413"} 2023-07-21 11:17:15,415 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=44, state=RUNNABLE; CloseRegionProcedure ad30f5a32ab3318e29f6bb37c63129b6, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:15,430 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=49, ppid=43, state=RUNNABLE; CloseRegionProcedure ebcba8f23ad518b980bd222b45d4d348, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:15,432 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=42, state=RUNNABLE; CloseRegionProcedure b34b491f20b5b207a4739b04422e6972, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:15,444 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=46, state=RUNNABLE; CloseRegionProcedure 687be8526fbd580e2020048aab72f661, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:15,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-21 11:17:15,567 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:15,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 447b76afd454e02f371ce70f46a3aec1, disabling compactions & flushes 2023-07-21 11:17:15,569 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. 2023-07-21 11:17:15,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. 
2023-07-21 11:17:15,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. after waiting 0 ms 2023-07-21 11:17:15,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. 2023-07-21 11:17:15,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 11:17:15,578 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1. 2023-07-21 11:17:15,578 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 447b76afd454e02f371ce70f46a3aec1: 2023-07-21 11:17:15,580 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:15,580 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:15,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing ad30f5a32ab3318e29f6bb37c63129b6, disabling compactions & flushes 2023-07-21 11:17:15,582 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. 2023-07-21 11:17:15,582 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=447b76afd454e02f371ce70f46a3aec1, regionState=CLOSED 2023-07-21 11:17:15,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. 2023-07-21 11:17:15,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. after waiting 0 ms 2023-07-21 11:17:15,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. 
2023-07-21 11:17:15,582 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938235582"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938235582"}]},"ts":"1689938235582"} 2023-07-21 11:17:15,588 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=45 2023-07-21 11:17:15,589 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=45, state=SUCCESS; CloseRegionProcedure 447b76afd454e02f371ce70f46a3aec1, server=jenkins-hbase17.apache.org,35009,1689938231406 in 172 msec 2023-07-21 11:17:15,594 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 11:17:15,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6. 2023-07-21 11:17:15,596 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for ad30f5a32ab3318e29f6bb37c63129b6: 2023-07-21 11:17:15,598 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=447b76afd454e02f371ce70f46a3aec1, UNASSIGN in 189 msec 2023-07-21 11:17:15,599 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:15,599 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:15,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing ebcba8f23ad518b980bd222b45d4d348, disabling compactions & flushes 2023-07-21 11:17:15,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. 2023-07-21 11:17:15,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. 2023-07-21 11:17:15,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. after waiting 0 ms 2023-07-21 11:17:15,601 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=ad30f5a32ab3318e29f6bb37c63129b6, regionState=CLOSED 2023-07-21 11:17:15,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. 
2023-07-21 11:17:15,601 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938235601"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938235601"}]},"ts":"1689938235601"} 2023-07-21 11:17:15,606 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=44 2023-07-21 11:17:15,606 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=44, state=SUCCESS; CloseRegionProcedure ad30f5a32ab3318e29f6bb37c63129b6, server=jenkins-hbase17.apache.org,35009,1689938231406 in 188 msec 2023-07-21 11:17:15,607 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 11:17:15,608 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ad30f5a32ab3318e29f6bb37c63129b6, UNASSIGN in 204 msec 2023-07-21 11:17:15,609 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348. 2023-07-21 11:17:15,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for ebcba8f23ad518b980bd222b45d4d348: 2023-07-21 11:17:15,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:15,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:15,614 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing b34b491f20b5b207a4739b04422e6972, disabling compactions & flushes 2023-07-21 11:17:15,614 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. 2023-07-21 11:17:15,614 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. 2023-07-21 11:17:15,614 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. after waiting 0 ms 2023-07-21 11:17:15,614 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. 
2023-07-21 11:17:15,616 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=ebcba8f23ad518b980bd222b45d4d348, regionState=CLOSED 2023-07-21 11:17:15,617 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938235616"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938235616"}]},"ts":"1689938235616"} 2023-07-21 11:17:15,623 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=43 2023-07-21 11:17:15,623 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=43, state=SUCCESS; CloseRegionProcedure ebcba8f23ad518b980bd222b45d4d348, server=jenkins-hbase17.apache.org,35009,1689938231406 in 189 msec 2023-07-21 11:17:15,626 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ebcba8f23ad518b980bd222b45d4d348, UNASSIGN in 221 msec 2023-07-21 11:17:15,631 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 11:17:15,633 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972. 2023-07-21 11:17:15,633 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for b34b491f20b5b207a4739b04422e6972: 2023-07-21 11:17:15,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:15,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 687be8526fbd580e2020048aab72f661 2023-07-21 11:17:15,638 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=b34b491f20b5b207a4739b04422e6972, regionState=CLOSED 2023-07-21 11:17:15,638 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938235638"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938235638"}]},"ts":"1689938235638"} 2023-07-21 11:17:15,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 687be8526fbd580e2020048aab72f661, disabling compactions & flushes 2023-07-21 11:17:15,640 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. 2023-07-21 11:17:15,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. 
2023-07-21 11:17:15,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. after waiting 0 ms 2023-07-21 11:17:15,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. 2023-07-21 11:17:15,656 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=42 2023-07-21 11:17:15,656 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=42, state=SUCCESS; CloseRegionProcedure b34b491f20b5b207a4739b04422e6972, server=jenkins-hbase17.apache.org,35009,1689938231406 in 214 msec 2023-07-21 11:17:15,659 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b34b491f20b5b207a4739b04422e6972, UNASSIGN in 254 msec 2023-07-21 11:17:15,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 11:17:15,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661. 2023-07-21 11:17:15,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 687be8526fbd580e2020048aab72f661: 2023-07-21 11:17:15,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-21 11:17:15,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 687be8526fbd580e2020048aab72f661 2023-07-21 11:17:15,707 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=687be8526fbd580e2020048aab72f661, regionState=CLOSED 2023-07-21 11:17:15,707 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938235707"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938235707"}]},"ts":"1689938235707"} 2023-07-21 11:17:15,725 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=46 2023-07-21 11:17:15,725 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=46, state=SUCCESS; CloseRegionProcedure 687be8526fbd580e2020048aab72f661, server=jenkins-hbase17.apache.org,35009,1689938231406 in 273 msec 2023-07-21 11:17:15,730 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=41 2023-07-21 11:17:15,730 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687be8526fbd580e2020048aab72f661, UNASSIGN in 323 msec 2023-07-21 11:17:15,732 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938235732"}]},"ts":"1689938235732"} 2023-07-21 11:17:15,734 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-21 11:17:15,736 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-21 11:17:15,741 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=41, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 361 msec 2023-07-21 11:17:16,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-21 11:17:16,002 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 41 completed 2023-07-21 11:17:16,003 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-21 11:17:16,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.HMaster$6(2260): Client=jenkins//136.243.18.41 truncate Group_testTableMoveTruncateAndDrop 2023-07-21 11:17:16,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=52, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-21 11:17:16,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-21 11:17:16,020 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-21 11:17:16,036 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:16,036 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:16,036 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661 2023-07-21 11:17:16,036 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:16,036 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:16,042 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972/f, FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972/recovered.edits] 2023-07-21 11:17:16,042 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1/f, FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1/recovered.edits] 2023-07-21 11:17:16,042 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348/f, FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348/recovered.edits] 2023-07-21 11:17:16,042 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661/f, FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661/recovered.edits] 2023-07-21 11:17:16,043 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6/f, FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6/recovered.edits] 2023-07-21 11:17:16,064 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1/recovered.edits/7.seqid to hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1/recovered.edits/7.seqid 2023-07-21 11:17:16,065 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348/recovered.edits/7.seqid to hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348/recovered.edits/7.seqid 2023-07-21 11:17:16,067 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661/recovered.edits/7.seqid to hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661/recovered.edits/7.seqid 2023-07-21 11:17:16,067 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/447b76afd454e02f371ce70f46a3aec1 2023-07-21 11:17:16,069 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6/recovered.edits/7.seqid to hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6/recovered.edits/7.seqid 2023-07-21 11:17:16,069 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972/recovered.edits/7.seqid to hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972/recovered.edits/7.seqid 2023-07-21 11:17:16,069 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ebcba8f23ad518b980bd222b45d4d348 2023-07-21 11:17:16,070 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/687be8526fbd580e2020048aab72f661 2023-07-21 11:17:16,070 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b34b491f20b5b207a4739b04422e6972 2023-07-21 11:17:16,070 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ad30f5a32ab3318e29f6bb37c63129b6 2023-07-21 11:17:16,070 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 11:17:16,096 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-21 11:17:16,101 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-21 11:17:16,102 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-21 11:17:16,102 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938236102"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:16,102 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938236102"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:16,102 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938236102"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:16,102 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938236102"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:16,102 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938236102"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:16,108 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-21 11:17:16,108 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => b34b491f20b5b207a4739b04422e6972, NAME => 'Group_testTableMoveTruncateAndDrop,,1689938233053.b34b491f20b5b207a4739b04422e6972.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => ebcba8f23ad518b980bd222b45d4d348, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689938233053.ebcba8f23ad518b980bd222b45d4d348.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => ad30f5a32ab3318e29f6bb37c63129b6, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938233053.ad30f5a32ab3318e29f6bb37c63129b6.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 447b76afd454e02f371ce70f46a3aec1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938233053.447b76afd454e02f371ce70f46a3aec1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 687be8526fbd580e2020048aab72f661, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689938233053.687be8526fbd580e2020048aab72f661.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-21 11:17:16,108 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
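The entries above trace the tail of a TruncateTableProcedure with preserveSplits=true: the table was first disabled (pid=41), its region directories archived, and its rows removed from hbase:meta before the regions are recreated further down. Purely as an illustration (not the test's own code), a minimal client-side sequence that would drive this pair of procedures through the standard hbase-client Admin API could look like the sketch below; the table name is taken from the log, while the configuration and connection setup are assumed.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TruncatePreservingSplits {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // A table must be disabled before it can be truncated (DisableTableProcedure, pid=41 above).
      if (!admin.isTableDisabled(table)) {
        admin.disableTable(table);
      }
      // preserveSplits=true recreates the table with its original split points
      // (TruncateTableProcedure, pid=52 above).
      admin.truncateTable(table, true);
    }
  }
}

Truncating with preserveSplits avoids re-splitting the freshly emptied table, which is why the same five key ranges reappear in the region creation entries that follow.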
2023-07-21 11:17:16,108 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689938236108"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:16,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-21 11:17:16,133 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-21 11:17:16,163 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6a790ab5e1f584b002df603d22315e72 2023-07-21 11:17:16,163 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0c977df4e904be836935ec514fdb2bc4 2023-07-21 11:17:16,163 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69e0a2376cc039a7b43998c06443681c 2023-07-21 11:17:16,164 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab7fb6dff0c62c6b1f329521be3c61bf 2023-07-21 11:17:16,164 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ae65828b5bec4ad50f58197472e13dd 2023-07-21 11:17:16,164 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6a790ab5e1f584b002df603d22315e72 empty. 2023-07-21 11:17:16,165 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ae65828b5bec4ad50f58197472e13dd empty. 2023-07-21 11:17:16,166 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0c977df4e904be836935ec514fdb2bc4 empty. 2023-07-21 11:17:16,166 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6a790ab5e1f584b002df603d22315e72 2023-07-21 11:17:16,166 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab7fb6dff0c62c6b1f329521be3c61bf empty. 2023-07-21 11:17:16,166 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69e0a2376cc039a7b43998c06443681c empty. 
2023-07-21 11:17:16,166 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0c977df4e904be836935ec514fdb2bc4 2023-07-21 11:17:16,166 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ae65828b5bec4ad50f58197472e13dd 2023-07-21 11:17:16,167 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69e0a2376cc039a7b43998c06443681c 2023-07-21 11:17:16,167 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab7fb6dff0c62c6b1f329521be3c61bf 2023-07-21 11:17:16,167 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 11:17:16,216 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:16,218 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6a790ab5e1f584b002df603d22315e72, NAME => 'Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:16,225 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 69e0a2376cc039a7b43998c06443681c, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:16,225 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 0c977df4e904be836935ec514fdb2bc4, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => 
{REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:16,324 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:16,324 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 69e0a2376cc039a7b43998c06443681c, disabling compactions & flushes 2023-07-21 11:17:16,324 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c. 2023-07-21 11:17:16,324 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c. 2023-07-21 11:17:16,324 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c. after waiting 0 ms 2023-07-21 11:17:16,324 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c. 2023-07-21 11:17:16,324 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c. 
2023-07-21 11:17:16,325 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 69e0a2376cc039a7b43998c06443681c: 2023-07-21 11:17:16,325 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => ab7fb6dff0c62c6b1f329521be3c61bf, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:16,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-21 11:17:16,344 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:16,344 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 6a790ab5e1f584b002df603d22315e72, disabling compactions & flushes 2023-07-21 11:17:16,344 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72. 2023-07-21 11:17:16,345 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72. 2023-07-21 11:17:16,345 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72. after waiting 0 ms 2023-07-21 11:17:16,345 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72. 2023-07-21 11:17:16,345 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72. 
2023-07-21 11:17:16,345 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 6a790ab5e1f584b002df603d22315e72: 2023-07-21 11:17:16,345 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9ae65828b5bec4ad50f58197472e13dd, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:16,365 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:16,365 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 0c977df4e904be836935ec514fdb2bc4, disabling compactions & flushes 2023-07-21 11:17:16,365 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4. 2023-07-21 11:17:16,365 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4. 2023-07-21 11:17:16,365 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4. after waiting 0 ms 2023-07-21 11:17:16,365 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4. 2023-07-21 11:17:16,365 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4. 
2023-07-21 11:17:16,365 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 0c977df4e904be836935ec514fdb2bc4: 2023-07-21 11:17:16,373 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:16,373 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing ab7fb6dff0c62c6b1f329521be3c61bf, disabling compactions & flushes 2023-07-21 11:17:16,374 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf. 2023-07-21 11:17:16,374 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf. 2023-07-21 11:17:16,374 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf. after waiting 0 ms 2023-07-21 11:17:16,374 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf. 2023-07-21 11:17:16,374 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf. 2023-07-21 11:17:16,374 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for ab7fb6dff0c62c6b1f329521be3c61bf: 2023-07-21 11:17:16,379 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:16,380 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 9ae65828b5bec4ad50f58197472e13dd, disabling compactions & flushes 2023-07-21 11:17:16,380 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd. 2023-07-21 11:17:16,380 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd. 2023-07-21 11:17:16,380 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd. 
after waiting 0 ms 2023-07-21 11:17:16,380 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd. 2023-07-21 11:17:16,380 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd. 2023-07-21 11:17:16,380 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 9ae65828b5bec4ad50f58197472e13dd: 2023-07-21 11:17:16,384 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938236384"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938236384"}]},"ts":"1689938236384"} 2023-07-21 11:17:16,384 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938236384"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938236384"}]},"ts":"1689938236384"} 2023-07-21 11:17:16,384 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938236384"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938236384"}]},"ts":"1689938236384"} 2023-07-21 11:17:16,384 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938236384"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938236384"}]},"ts":"1689938236384"} 2023-07-21 11:17:16,384 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938236384"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938236384"}]},"ts":"1689938236384"} 2023-07-21 11:17:16,388 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
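The five regions added back to meta above keep the original split boundaries 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B' and 'zzzzz', each with the single column family 'f'. As a sketch only of how such a pre-split layout could be produced with the hbase-client API (the split keys and family name come from the log; the wrapper class and the Admin handle are assumptions):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public final class CreatePreSplitTable {
  static void create(Admin admin) throws java.io.IOException {
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    TableDescriptor desc = TableDescriptorBuilder.newBuilder(table)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        .build();
    // Bytes.toBytesBinary understands the \xNN escapes that appear in the log output.
    byte[][] splits = new byte[][] {
        Bytes.toBytes("aaaaa"),
        Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
        Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
        Bytes.toBytes("zzzzz")
    };
    // Four split keys yield the five regions listed above, from ('', 'aaaaa') to ('zzzzz', '').
    admin.createTable(desc, splits);
  }
}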
2023-07-21 11:17:16,390 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938236389"}]},"ts":"1689938236389"} 2023-07-21 11:17:16,392 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-21 11:17:16,396 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:16,396 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:16,396 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:16,396 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:16,400 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6a790ab5e1f584b002df603d22315e72, ASSIGN}, {pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0c977df4e904be836935ec514fdb2bc4, ASSIGN}, {pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69e0a2376cc039a7b43998c06443681c, ASSIGN}, {pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab7fb6dff0c62c6b1f329521be3c61bf, ASSIGN}, {pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ae65828b5bec4ad50f58197472e13dd, ASSIGN}] 2023-07-21 11:17:16,402 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6a790ab5e1f584b002df603d22315e72, ASSIGN 2023-07-21 11:17:16,402 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0c977df4e904be836935ec514fdb2bc4, ASSIGN 2023-07-21 11:17:16,403 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69e0a2376cc039a7b43998c06443681c, ASSIGN 2023-07-21 11:17:16,403 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab7fb6dff0c62c6b1f329521be3c61bf, ASSIGN 2023-07-21 11:17:16,403 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ae65828b5bec4ad50f58197472e13dd, ASSIGN 2023-07-21 11:17:16,404 INFO [PEWorker-3] 
assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0c977df4e904be836935ec514fdb2bc4, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,35009,1689938231406; forceNewPlan=false, retain=false 2023-07-21 11:17:16,405 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69e0a2376cc039a7b43998c06443681c, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,35009,1689938231406; forceNewPlan=false, retain=false 2023-07-21 11:17:16,404 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6a790ab5e1f584b002df603d22315e72, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,33011,1689938225358; forceNewPlan=false, retain=false 2023-07-21 11:17:16,407 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab7fb6dff0c62c6b1f329521be3c61bf, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,33011,1689938225358; forceNewPlan=false, retain=false 2023-07-21 11:17:16,407 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ae65828b5bec4ad50f58197472e13dd, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,35009,1689938231406; forceNewPlan=false, retain=false 2023-07-21 11:17:16,557 INFO [jenkins-hbase17:40703] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
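The balancer entry above picks destinations for the five new regions from two candidate servers (the "Hosts are {...} racks are {...}" lines at 11:17:16,396). In this rsgroup test the candidate set is constrained by regionserver-group membership; judging by the test name, the table was moved to a dedicated group earlier in the run, which is not shown in this excerpt. Purely as a hedged sketch of how such a setup is typically done with the hbase-rsgroup client (class and method names as I understand the 2.x module; the group name and server address below are illustrative assumptions):

import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class MoveTableToGroup {
  static void move(Connection conn) throws java.io.IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // Hypothetical group name; the real test derives its own group names.
    rsGroupAdmin.addRSGroup("appInfo");
    // Example server address built from a host,port seen in this log.
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase17.apache.org", 33011)),
        "appInfo");
    rsGroupAdmin.moveTables(
        Collections.singleton(TableName.valueOf("Group_testTableMoveTruncateAndDrop")),
        "appInfo");
  }
}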
2023-07-21 11:17:16,561 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=69e0a2376cc039a7b43998c06443681c, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:16,561 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=0c977df4e904be836935ec514fdb2bc4, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:16,562 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938236561"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938236561"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938236561"}]},"ts":"1689938236561"} 2023-07-21 11:17:16,561 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=ab7fb6dff0c62c6b1f329521be3c61bf, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:16,562 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938236561"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938236561"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938236561"}]},"ts":"1689938236561"} 2023-07-21 11:17:16,561 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=9ae65828b5bec4ad50f58197472e13dd, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:16,563 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938236561"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938236561"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938236561"}]},"ts":"1689938236561"} 2023-07-21 11:17:16,562 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938236561"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938236561"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938236561"}]},"ts":"1689938236561"} 2023-07-21 11:17:16,562 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=6a790ab5e1f584b002df603d22315e72, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:16,563 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938236562"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938236562"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938236562"}]},"ts":"1689938236562"} 2023-07-21 11:17:16,565 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=54, state=RUNNABLE; OpenRegionProcedure 
0c977df4e904be836935ec514fdb2bc4, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:16,569 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=57, state=RUNNABLE; OpenRegionProcedure 9ae65828b5bec4ad50f58197472e13dd, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:16,571 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=55, state=RUNNABLE; OpenRegionProcedure 69e0a2376cc039a7b43998c06443681c, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:16,573 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=53, state=RUNNABLE; OpenRegionProcedure 6a790ab5e1f584b002df603d22315e72, server=jenkins-hbase17.apache.org,33011,1689938225358}] 2023-07-21 11:17:16,574 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=56, state=RUNNABLE; OpenRegionProcedure ab7fb6dff0c62c6b1f329521be3c61bf, server=jenkins-hbase17.apache.org,33011,1689938225358}] 2023-07-21 11:17:16,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-21 11:17:16,727 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd. 2023-07-21 11:17:16,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9ae65828b5bec4ad50f58197472e13dd, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 11:17:16,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9ae65828b5bec4ad50f58197472e13dd 2023-07-21 11:17:16,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:16,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 9ae65828b5bec4ad50f58197472e13dd 2023-07-21 11:17:16,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 9ae65828b5bec4ad50f58197472e13dd 2023-07-21 11:17:16,730 INFO [StoreOpener-9ae65828b5bec4ad50f58197472e13dd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9ae65828b5bec4ad50f58197472e13dd 2023-07-21 11:17:16,734 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72. 
2023-07-21 11:17:16,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6a790ab5e1f584b002df603d22315e72, NAME => 'Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 11:17:16,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 6a790ab5e1f584b002df603d22315e72 2023-07-21 11:17:16,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:16,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 6a790ab5e1f584b002df603d22315e72 2023-07-21 11:17:16,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 6a790ab5e1f584b002df603d22315e72 2023-07-21 11:17:16,735 DEBUG [StoreOpener-9ae65828b5bec4ad50f58197472e13dd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/9ae65828b5bec4ad50f58197472e13dd/f 2023-07-21 11:17:16,736 DEBUG [StoreOpener-9ae65828b5bec4ad50f58197472e13dd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/9ae65828b5bec4ad50f58197472e13dd/f 2023-07-21 11:17:16,736 INFO [StoreOpener-9ae65828b5bec4ad50f58197472e13dd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9ae65828b5bec4ad50f58197472e13dd columnFamilyName f 2023-07-21 11:17:16,737 INFO [StoreOpener-9ae65828b5bec4ad50f58197472e13dd-1] regionserver.HStore(310): Store=9ae65828b5bec4ad50f58197472e13dd/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:16,738 INFO [StoreOpener-6a790ab5e1f584b002df603d22315e72-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6a790ab5e1f584b002df603d22315e72 2023-07-21 11:17:16,738 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/9ae65828b5bec4ad50f58197472e13dd 2023-07-21 11:17:16,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/9ae65828b5bec4ad50f58197472e13dd 2023-07-21 11:17:16,740 DEBUG [StoreOpener-6a790ab5e1f584b002df603d22315e72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/6a790ab5e1f584b002df603d22315e72/f 2023-07-21 11:17:16,740 DEBUG [StoreOpener-6a790ab5e1f584b002df603d22315e72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/6a790ab5e1f584b002df603d22315e72/f 2023-07-21 11:17:16,741 INFO [StoreOpener-6a790ab5e1f584b002df603d22315e72-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6a790ab5e1f584b002df603d22315e72 columnFamilyName f 2023-07-21 11:17:16,743 INFO [StoreOpener-6a790ab5e1f584b002df603d22315e72-1] regionserver.HStore(310): Store=6a790ab5e1f584b002df603d22315e72/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:16,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 9ae65828b5bec4ad50f58197472e13dd 2023-07-21 11:17:16,749 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/6a790ab5e1f584b002df603d22315e72 2023-07-21 11:17:16,750 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/6a790ab5e1f584b002df603d22315e72 2023-07-21 11:17:16,754 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 6a790ab5e1f584b002df603d22315e72 2023-07-21 11:17:16,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/9ae65828b5bec4ad50f58197472e13dd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:16,765 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/6a790ab5e1f584b002df603d22315e72/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:16,765 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 9ae65828b5bec4ad50f58197472e13dd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11684444640, jitterRate=0.0881987065076828}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:16,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 9ae65828b5bec4ad50f58197472e13dd: 2023-07-21 11:17:16,766 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 6a790ab5e1f584b002df603d22315e72; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9942636160, jitterRate=-0.07401984930038452}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:16,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 6a790ab5e1f584b002df603d22315e72: 2023-07-21 11:17:16,767 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72., pid=61, masterSystemTime=1689938236726 2023-07-21 11:17:16,768 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd., pid=59, masterSystemTime=1689938236720 2023-07-21 11:17:16,769 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72. 2023-07-21 11:17:16,770 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72. 2023-07-21 11:17:16,770 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf. 
2023-07-21 11:17:16,770 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ab7fb6dff0c62c6b1f329521be3c61bf, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 11:17:16,770 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ab7fb6dff0c62c6b1f329521be3c61bf 2023-07-21 11:17:16,770 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:16,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for ab7fb6dff0c62c6b1f329521be3c61bf 2023-07-21 11:17:16,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for ab7fb6dff0c62c6b1f329521be3c61bf 2023-07-21 11:17:16,772 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=6a790ab5e1f584b002df603d22315e72, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:16,772 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd. 2023-07-21 11:17:16,772 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938236772"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938236772"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938236772"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938236772"}]},"ts":"1689938236772"} 2023-07-21 11:17:16,772 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd. 2023-07-21 11:17:16,772 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4. 
2023-07-21 11:17:16,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0c977df4e904be836935ec514fdb2bc4, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 11:17:16,773 INFO [StoreOpener-ab7fb6dff0c62c6b1f329521be3c61bf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ab7fb6dff0c62c6b1f329521be3c61bf 2023-07-21 11:17:16,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 0c977df4e904be836935ec514fdb2bc4 2023-07-21 11:17:16,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:16,775 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=9ae65828b5bec4ad50f58197472e13dd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:16,775 DEBUG [StoreOpener-ab7fb6dff0c62c6b1f329521be3c61bf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ab7fb6dff0c62c6b1f329521be3c61bf/f 2023-07-21 11:17:16,776 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 0c977df4e904be836935ec514fdb2bc4 2023-07-21 11:17:16,776 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938236775"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938236775"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938236775"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938236775"}]},"ts":"1689938236775"} 2023-07-21 11:17:16,776 DEBUG [StoreOpener-ab7fb6dff0c62c6b1f329521be3c61bf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ab7fb6dff0c62c6b1f329521be3c61bf/f 2023-07-21 11:17:16,776 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 0c977df4e904be836935ec514fdb2bc4 2023-07-21 11:17:16,777 INFO [StoreOpener-ab7fb6dff0c62c6b1f329521be3c61bf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ab7fb6dff0c62c6b1f329521be3c61bf columnFamilyName f 2023-07-21 11:17:16,783 INFO [StoreOpener-ab7fb6dff0c62c6b1f329521be3c61bf-1] regionserver.HStore(310): Store=ab7fb6dff0c62c6b1f329521be3c61bf/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:16,785 INFO [StoreOpener-0c977df4e904be836935ec514fdb2bc4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0c977df4e904be836935ec514fdb2bc4 2023-07-21 11:17:16,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ab7fb6dff0c62c6b1f329521be3c61bf 2023-07-21 11:17:16,787 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ab7fb6dff0c62c6b1f329521be3c61bf 2023-07-21 11:17:16,789 DEBUG [StoreOpener-0c977df4e904be836935ec514fdb2bc4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/0c977df4e904be836935ec514fdb2bc4/f 2023-07-21 11:17:16,789 DEBUG [StoreOpener-0c977df4e904be836935ec514fdb2bc4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/0c977df4e904be836935ec514fdb2bc4/f 2023-07-21 11:17:16,790 INFO [StoreOpener-0c977df4e904be836935ec514fdb2bc4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0c977df4e904be836935ec514fdb2bc4 columnFamilyName f 2023-07-21 11:17:16,793 INFO [StoreOpener-0c977df4e904be836935ec514fdb2bc4-1] regionserver.HStore(310): Store=0c977df4e904be836935ec514fdb2bc4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:16,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for ab7fb6dff0c62c6b1f329521be3c61bf 2023-07-21 11:17:16,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered 
edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/0c977df4e904be836935ec514fdb2bc4 2023-07-21 11:17:16,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/0c977df4e904be836935ec514fdb2bc4 2023-07-21 11:17:16,800 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ab7fb6dff0c62c6b1f329521be3c61bf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:16,803 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened ab7fb6dff0c62c6b1f329521be3c61bf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10251613760, jitterRate=-0.045244067907333374}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:16,803 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 0c977df4e904be836935ec514fdb2bc4 2023-07-21 11:17:16,803 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for ab7fb6dff0c62c6b1f329521be3c61bf: 2023-07-21 11:17:16,805 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=53 2023-07-21 11:17:16,805 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=53, state=SUCCESS; OpenRegionProcedure 6a790ab5e1f584b002df603d22315e72, server=jenkins-hbase17.apache.org,33011,1689938225358 in 206 msec 2023-07-21 11:17:16,808 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf., pid=62, masterSystemTime=1689938236726 2023-07-21 11:17:16,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/0c977df4e904be836935ec514fdb2bc4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:16,812 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 0c977df4e904be836935ec514fdb2bc4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11118112320, jitterRate=0.035454899072647095}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:16,812 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 0c977df4e904be836935ec514fdb2bc4: 2023-07-21 11:17:16,816 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4., pid=58, masterSystemTime=1689938236720 2023-07-21 11:17:16,820 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=52, state=SUCCESS; 
TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6a790ab5e1f584b002df603d22315e72, ASSIGN in 408 msec 2023-07-21 11:17:16,821 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=57 2023-07-21 11:17:16,821 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=57, state=SUCCESS; OpenRegionProcedure 9ae65828b5bec4ad50f58197472e13dd, server=jenkins-hbase17.apache.org,35009,1689938231406 in 215 msec 2023-07-21 11:17:16,821 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf. 2023-07-21 11:17:16,821 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf. 2023-07-21 11:17:16,825 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=ab7fb6dff0c62c6b1f329521be3c61bf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:16,825 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938236825"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938236825"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938236825"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938236825"}]},"ts":"1689938236825"} 2023-07-21 11:17:16,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4. 2023-07-21 11:17:16,826 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4. 2023-07-21 11:17:16,827 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c. 
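
The CompactionConfiguration(173) entries above echo each store's effective compaction settings: minCompactSize 128 MB, minFilesToCompact 3, maxFilesToCompact 10, ratio 1.2, off-peak ratio 5.0, major period 604800000 ms with 0.5 jitter. A minimal sketch of the stock configuration keys behind those numbers (the key-to-value mapping is standard HBase; nothing in the log implies this test overrides them, and the class name below is a placeholder):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Overriding any of these changes the CompactionConfiguration line printed when a store opens.
        conf.setInt("hbase.hstore.compaction.min", 3);                 // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);                // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);          // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);  // off-peak ratio
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);     // major period (ms)
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);   // major jitter
      }
    }
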
2023-07-21 11:17:16,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 69e0a2376cc039a7b43998c06443681c, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 11:17:16,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 69e0a2376cc039a7b43998c06443681c 2023-07-21 11:17:16,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:16,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 69e0a2376cc039a7b43998c06443681c 2023-07-21 11:17:16,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 69e0a2376cc039a7b43998c06443681c 2023-07-21 11:17:16,829 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ae65828b5bec4ad50f58197472e13dd, ASSIGN in 421 msec 2023-07-21 11:17:16,831 INFO [StoreOpener-69e0a2376cc039a7b43998c06443681c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 69e0a2376cc039a7b43998c06443681c 2023-07-21 11:17:16,834 DEBUG [StoreOpener-69e0a2376cc039a7b43998c06443681c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/69e0a2376cc039a7b43998c06443681c/f 2023-07-21 11:17:16,834 DEBUG [StoreOpener-69e0a2376cc039a7b43998c06443681c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/69e0a2376cc039a7b43998c06443681c/f 2023-07-21 11:17:16,834 INFO [StoreOpener-69e0a2376cc039a7b43998c06443681c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 69e0a2376cc039a7b43998c06443681c columnFamilyName f 2023-07-21 11:17:16,835 INFO [StoreOpener-69e0a2376cc039a7b43998c06443681c-1] regionserver.HStore(310): Store=69e0a2376cc039a7b43998c06443681c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-21 11:17:16,836 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=0c977df4e904be836935ec514fdb2bc4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:16,837 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938236836"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938236836"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938236836"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938236836"}]},"ts":"1689938236836"} 2023-07-21 11:17:16,837 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/69e0a2376cc039a7b43998c06443681c 2023-07-21 11:17:16,838 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/69e0a2376cc039a7b43998c06443681c 2023-07-21 11:17:16,844 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 69e0a2376cc039a7b43998c06443681c 2023-07-21 11:17:16,844 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=56 2023-07-21 11:17:16,844 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=56, state=SUCCESS; OpenRegionProcedure ab7fb6dff0c62c6b1f329521be3c61bf, server=jenkins-hbase17.apache.org,33011,1689938225358 in 256 msec 2023-07-21 11:17:16,847 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab7fb6dff0c62c6b1f329521be3c61bf, ASSIGN in 444 msec 2023-07-21 11:17:16,848 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=54 2023-07-21 11:17:16,848 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=54, state=SUCCESS; OpenRegionProcedure 0c977df4e904be836935ec514fdb2bc4, server=jenkins-hbase17.apache.org,35009,1689938231406 in 277 msec 2023-07-21 11:17:16,851 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/69e0a2376cc039a7b43998c06443681c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:16,851 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0c977df4e904be836935ec514fdb2bc4, ASSIGN in 448 msec 2023-07-21 11:17:16,852 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 69e0a2376cc039a7b43998c06443681c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9503403840, jitterRate=-0.1149265468120575}}}, 
FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:16,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 69e0a2376cc039a7b43998c06443681c: 2023-07-21 11:17:16,853 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c., pid=60, masterSystemTime=1689938236720 2023-07-21 11:17:16,856 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c. 2023-07-21 11:17:16,856 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c. 2023-07-21 11:17:16,857 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=69e0a2376cc039a7b43998c06443681c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:16,858 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938236857"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938236857"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938236857"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938236857"}]},"ts":"1689938236857"} 2023-07-21 11:17:16,867 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=55 2023-07-21 11:17:16,867 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=55, state=SUCCESS; OpenRegionProcedure 69e0a2376cc039a7b43998c06443681c, server=jenkins-hbase17.apache.org,35009,1689938231406 in 290 msec 2023-07-21 11:17:16,875 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=52 2023-07-21 11:17:16,875 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69e0a2376cc039a7b43998c06443681c, ASSIGN in 467 msec 2023-07-21 11:17:16,876 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938236875"}]},"ts":"1689938236875"} 2023-07-21 11:17:16,890 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-21 11:17:16,892 DEBUG [PEWorker-4] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-21 11:17:16,896 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=52, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 883 msec 2023-07-21 11:17:17,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-21 11:17:17,132 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table 
Name: default:Group_testTableMoveTruncateAndDrop, procId: 52 completed 2023-07-21 11:17:17,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:17,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:17,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:17,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:17,135 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-21 11:17:17,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testTableMoveTruncateAndDrop 2023-07-21 11:17:17,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=63, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 11:17:17,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-21 11:17:17,141 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938237141"}]},"ts":"1689938237141"} 2023-07-21 11:17:17,143 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-21 11:17:17,144 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-21 11:17:17,145 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6a790ab5e1f584b002df603d22315e72, UNASSIGN}, {pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0c977df4e904be836935ec514fdb2bc4, UNASSIGN}, {pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69e0a2376cc039a7b43998c06443681c, UNASSIGN}, {pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab7fb6dff0c62c6b1f329521be3c61bf, UNASSIGN}, {pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ae65828b5bec4ad50f58197472e13dd, UNASSIGN}] 2023-07-21 11:17:17,148 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=63, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0c977df4e904be836935ec514fdb2bc4, UNASSIGN 2023-07-21 11:17:17,148 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6a790ab5e1f584b002df603d22315e72, UNASSIGN 2023-07-21 11:17:17,148 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69e0a2376cc039a7b43998c06443681c, UNASSIGN 2023-07-21 11:17:17,148 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ae65828b5bec4ad50f58197472e13dd, UNASSIGN 2023-07-21 11:17:17,148 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab7fb6dff0c62c6b1f329521be3c61bf, UNASSIGN 2023-07-21 11:17:17,150 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=0c977df4e904be836935ec514fdb2bc4, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:17,150 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=69e0a2376cc039a7b43998c06443681c, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:17,150 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=6a790ab5e1f584b002df603d22315e72, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:17,150 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938237150"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938237150"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938237150"}]},"ts":"1689938237150"} 2023-07-21 11:17:17,150 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938237150"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938237150"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938237150"}]},"ts":"1689938237150"} 2023-07-21 11:17:17,150 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938237150"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938237150"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938237150"}]},"ts":"1689938237150"} 2023-07-21 11:17:17,150 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=9ae65828b5bec4ad50f58197472e13dd, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 
11:17:17,150 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=ab7fb6dff0c62c6b1f329521be3c61bf, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:17,151 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938237150"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938237150"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938237150"}]},"ts":"1689938237150"} 2023-07-21 11:17:17,151 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938237150"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938237150"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938237150"}]},"ts":"1689938237150"} 2023-07-21 11:17:17,152 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=66, state=RUNNABLE; CloseRegionProcedure 69e0a2376cc039a7b43998c06443681c, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:17,153 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=64, state=RUNNABLE; CloseRegionProcedure 6a790ab5e1f584b002df603d22315e72, server=jenkins-hbase17.apache.org,33011,1689938225358}] 2023-07-21 11:17:17,156 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=71, ppid=65, state=RUNNABLE; CloseRegionProcedure 0c977df4e904be836935ec514fdb2bc4, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:17,157 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=68, state=RUNNABLE; CloseRegionProcedure 9ae65828b5bec4ad50f58197472e13dd, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:17,159 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=67, state=RUNNABLE; CloseRegionProcedure ab7fb6dff0c62c6b1f329521be3c61bf, server=jenkins-hbase17.apache.org,33011,1689938225358}] 2023-07-21 11:17:17,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-21 11:17:17,309 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 69e0a2376cc039a7b43998c06443681c 2023-07-21 11:17:17,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 69e0a2376cc039a7b43998c06443681c, disabling compactions & flushes 2023-07-21 11:17:17,310 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c. 2023-07-21 11:17:17,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c. 
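
The TRUNCATE that completed above as procId 52 (TruncateTableProcedure, preserveSplits=true), the DISABLE now running as procId 63, and the DELETE that follows further below as procId 74 are each a single Admin call on the client side; the per-region ASSIGN/UNASSIGN procedures are server-side fan-out. A minimal client-side sketch, assuming the standard HBase 2.x Admin API (class name and connection setup are placeholders, not the test's actual code):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TableLifecycleSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Assumes the table was already disabled beforehand (as earlier in this test);
          // preserveSplits=true keeps the split points visible in the region names above.
          admin.truncateTable(table, true);   // procId 52 in the log
          admin.disableTable(table);          // procId 63: fans out into one UNASSIGN per region
          admin.deleteTable(table);           // procId 74, further below
        }
      }
    }
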
2023-07-21 11:17:17,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c. after waiting 0 ms 2023-07-21 11:17:17,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c. 2023-07-21 11:17:17,311 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 6a790ab5e1f584b002df603d22315e72 2023-07-21 11:17:17,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 6a790ab5e1f584b002df603d22315e72, disabling compactions & flushes 2023-07-21 11:17:17,312 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72. 2023-07-21 11:17:17,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72. 2023-07-21 11:17:17,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72. after waiting 0 ms 2023-07-21 11:17:17,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72. 2023-07-21 11:17:17,317 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/69e0a2376cc039a7b43998c06443681c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:17,317 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c. 2023-07-21 11:17:17,317 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 69e0a2376cc039a7b43998c06443681c: 2023-07-21 11:17:17,318 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/6a790ab5e1f584b002df603d22315e72/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:17,319 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72. 
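
The UnassignRegionHandler close sequence above and below works through the same five regions the truncate recreated. While the table still exists, their boundaries can be listed from the client; a sketch under the same assumptions as the previous block:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ListRegionsSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Expect five regions: '' -> 'aaaaa' -> 'i\xBF\x14i\xBE' -> 'r\x1C\xC7r\x1B' -> 'zzzzz' -> ''
          for (RegionInfo ri : admin.getRegions(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))) {
            System.out.println(ri.getEncodedName() + " ["
                + Bytes.toStringBinary(ri.getStartKey()) + ", "
                + Bytes.toStringBinary(ri.getEndKey()) + ")");
          }
        }
      }
    }
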
2023-07-21 11:17:17,319 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 6a790ab5e1f584b002df603d22315e72: 2023-07-21 11:17:17,320 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 69e0a2376cc039a7b43998c06443681c 2023-07-21 11:17:17,320 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 0c977df4e904be836935ec514fdb2bc4 2023-07-21 11:17:17,320 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=69e0a2376cc039a7b43998c06443681c, regionState=CLOSED 2023-07-21 11:17:17,321 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 0c977df4e904be836935ec514fdb2bc4, disabling compactions & flushes 2023-07-21 11:17:17,321 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4. 2023-07-21 11:17:17,321 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938237320"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938237320"}]},"ts":"1689938237320"} 2023-07-21 11:17:17,321 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4. 2023-07-21 11:17:17,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4. after waiting 0 ms 2023-07-21 11:17:17,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4. 2023-07-21 11:17:17,322 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 6a790ab5e1f584b002df603d22315e72 2023-07-21 11:17:17,322 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close ab7fb6dff0c62c6b1f329521be3c61bf 2023-07-21 11:17:17,324 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing ab7fb6dff0c62c6b1f329521be3c61bf, disabling compactions & flushes 2023-07-21 11:17:17,324 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf. 2023-07-21 11:17:17,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf. 2023-07-21 11:17:17,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf. 
after waiting 0 ms 2023-07-21 11:17:17,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf. 2023-07-21 11:17:17,325 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=6a790ab5e1f584b002df603d22315e72, regionState=CLOSED 2023-07-21 11:17:17,325 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938237325"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938237325"}]},"ts":"1689938237325"} 2023-07-21 11:17:17,329 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=66 2023-07-21 11:17:17,330 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=66, state=SUCCESS; CloseRegionProcedure 69e0a2376cc039a7b43998c06443681c, server=jenkins-hbase17.apache.org,35009,1689938231406 in 173 msec 2023-07-21 11:17:17,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/0c977df4e904be836935ec514fdb2bc4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:17,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/ab7fb6dff0c62c6b1f329521be3c61bf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:17,335 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4. 2023-07-21 11:17:17,335 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf. 
2023-07-21 11:17:17,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 0c977df4e904be836935ec514fdb2bc4: 2023-07-21 11:17:17,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for ab7fb6dff0c62c6b1f329521be3c61bf: 2023-07-21 11:17:17,337 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69e0a2376cc039a7b43998c06443681c, UNASSIGN in 185 msec 2023-07-21 11:17:17,341 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 0c977df4e904be836935ec514fdb2bc4 2023-07-21 11:17:17,342 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 9ae65828b5bec4ad50f58197472e13dd 2023-07-21 11:17:17,343 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 9ae65828b5bec4ad50f58197472e13dd, disabling compactions & flushes 2023-07-21 11:17:17,343 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=64 2023-07-21 11:17:17,343 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd. 2023-07-21 11:17:17,343 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=64, state=SUCCESS; CloseRegionProcedure 6a790ab5e1f584b002df603d22315e72, server=jenkins-hbase17.apache.org,33011,1689938225358 in 179 msec 2023-07-21 11:17:17,343 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=0c977df4e904be836935ec514fdb2bc4, regionState=CLOSED 2023-07-21 11:17:17,343 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938237343"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938237343"}]},"ts":"1689938237343"} 2023-07-21 11:17:17,343 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd. 2023-07-21 11:17:17,344 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd. after waiting 0 ms 2023-07-21 11:17:17,344 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd. 
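
The recovered.edits/<N>.seqid entries above are marker files whose name records the max sequence id at region close (4.seqid here) or open (1.seqid earlier). While the table still exists they can be listed directly on the test DFS; a minimal sketch using the plain Hadoop FileSystem API, with the region path taken verbatim from the WALSplitUtil lines above:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SeqIdMarkerSketch {
      public static void main(String[] args) throws Exception {
        Path edits = new Path("hdfs://localhost.localdomain:38415/user/jenkins/test-data/"
            + "73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/"
            + "69e0a2376cc039a7b43998c06443681c/recovered.edits");
        FileSystem fs = FileSystem.get(URI.create(edits.toString()), new Configuration());
        for (FileStatus st : fs.listStatus(edits)) {
          // Prints the marker names, e.g. "4.seqid"; the sequence id is carried in the file name.
          System.out.println(st.getPath().getName() + " len=" + st.getLen());
        }
      }
    }
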
2023-07-21 11:17:17,345 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed ab7fb6dff0c62c6b1f329521be3c61bf 2023-07-21 11:17:17,347 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=ab7fb6dff0c62c6b1f329521be3c61bf, regionState=CLOSED 2023-07-21 11:17:17,348 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689938237347"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938237347"}]},"ts":"1689938237347"} 2023-07-21 11:17:17,352 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6a790ab5e1f584b002df603d22315e72, UNASSIGN in 198 msec 2023-07-21 11:17:17,355 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testTableMoveTruncateAndDrop/9ae65828b5bec4ad50f58197472e13dd/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:17,356 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd. 2023-07-21 11:17:17,356 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 9ae65828b5bec4ad50f58197472e13dd: 2023-07-21 11:17:17,367 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=65 2023-07-21 11:17:17,367 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=65, state=SUCCESS; CloseRegionProcedure 0c977df4e904be836935ec514fdb2bc4, server=jenkins-hbase17.apache.org,35009,1689938231406 in 192 msec 2023-07-21 11:17:17,368 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 9ae65828b5bec4ad50f58197472e13dd 2023-07-21 11:17:17,369 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=9ae65828b5bec4ad50f58197472e13dd, regionState=CLOSED 2023-07-21 11:17:17,373 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=67 2023-07-21 11:17:17,374 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=67, state=SUCCESS; CloseRegionProcedure ab7fb6dff0c62c6b1f329521be3c61bf, server=jenkins-hbase17.apache.org,33011,1689938225358 in 198 msec 2023-07-21 11:17:17,374 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689938237368"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938237368"}]},"ts":"1689938237368"} 2023-07-21 11:17:17,376 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0c977df4e904be836935ec514fdb2bc4, UNASSIGN in 222 msec 2023-07-21 11:17:17,378 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=ab7fb6dff0c62c6b1f329521be3c61bf, UNASSIGN in 229 msec 2023-07-21 11:17:17,381 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=68 2023-07-21 11:17:17,381 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=68, state=SUCCESS; CloseRegionProcedure 9ae65828b5bec4ad50f58197472e13dd, server=jenkins-hbase17.apache.org,35009,1689938231406 in 221 msec 2023-07-21 11:17:17,387 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=63 2023-07-21 11:17:17,387 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ae65828b5bec4ad50f58197472e13dd, UNASSIGN in 236 msec 2023-07-21 11:17:17,390 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938237390"}]},"ts":"1689938237390"} 2023-07-21 11:17:17,393 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-21 11:17:17,394 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-21 11:17:17,398 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=63, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 259 msec 2023-07-21 11:17:17,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-21 11:17:17,444 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 63 completed 2023-07-21 11:17:17,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_testTableMoveTruncateAndDrop 2023-07-21 11:17:17,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 11:17:17,463 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 11:17:17,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_287573559' 2023-07-21 11:17:17,465 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=74, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 11:17:17,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:17,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:17,471 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:17,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:17,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-21 11:17:17,488 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6a790ab5e1f584b002df603d22315e72 2023-07-21 11:17:17,488 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ae65828b5bec4ad50f58197472e13dd 2023-07-21 11:17:17,488 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab7fb6dff0c62c6b1f329521be3c61bf 2023-07-21 11:17:17,488 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69e0a2376cc039a7b43998c06443681c 2023-07-21 11:17:17,488 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0c977df4e904be836935ec514fdb2bc4 2023-07-21 11:17:17,493 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ae65828b5bec4ad50f58197472e13dd/f, FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ae65828b5bec4ad50f58197472e13dd/recovered.edits] 2023-07-21 11:17:17,495 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab7fb6dff0c62c6b1f329521be3c61bf/f, FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab7fb6dff0c62c6b1f329521be3c61bf/recovered.edits] 2023-07-21 11:17:17,495 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6a790ab5e1f584b002df603d22315e72/f, FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6a790ab5e1f584b002df603d22315e72/recovered.edits] 2023-07-21 11:17:17,495 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0c977df4e904be836935ec514fdb2bc4/f, FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0c977df4e904be836935ec514fdb2bc4/recovered.edits] 2023-07-21 11:17:17,497 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69e0a2376cc039a7b43998c06443681c/f, FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69e0a2376cc039a7b43998c06443681c/recovered.edits] 2023-07-21 11:17:17,529 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ae65828b5bec4ad50f58197472e13dd/recovered.edits/4.seqid to hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/Group_testTableMoveTruncateAndDrop/9ae65828b5bec4ad50f58197472e13dd/recovered.edits/4.seqid 2023-07-21 11:17:17,530 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0c977df4e904be836935ec514fdb2bc4/recovered.edits/4.seqid to hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/Group_testTableMoveTruncateAndDrop/0c977df4e904be836935ec514fdb2bc4/recovered.edits/4.seqid 2023-07-21 11:17:17,531 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab7fb6dff0c62c6b1f329521be3c61bf/recovered.edits/4.seqid to hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/Group_testTableMoveTruncateAndDrop/ab7fb6dff0c62c6b1f329521be3c61bf/recovered.edits/4.seqid 2023-07-21 11:17:17,532 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0c977df4e904be836935ec514fdb2bc4 2023-07-21 11:17:17,533 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ae65828b5bec4ad50f58197472e13dd 2023-07-21 11:17:17,536 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab7fb6dff0c62c6b1f329521be3c61bf 2023-07-21 11:17:17,540 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69e0a2376cc039a7b43998c06443681c/recovered.edits/4.seqid to 
hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/Group_testTableMoveTruncateAndDrop/69e0a2376cc039a7b43998c06443681c/recovered.edits/4.seqid 2023-07-21 11:17:17,541 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6a790ab5e1f584b002df603d22315e72/recovered.edits/4.seqid to hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/Group_testTableMoveTruncateAndDrop/6a790ab5e1f584b002df603d22315e72/recovered.edits/4.seqid 2023-07-21 11:17:17,542 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69e0a2376cc039a7b43998c06443681c 2023-07-21 11:17:17,543 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6a790ab5e1f584b002df603d22315e72 2023-07-21 11:17:17,543 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 11:17:17,550 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=74, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 11:17:17,559 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-21 11:17:17,569 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-21 11:17:17,571 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=74, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 11:17:17,571 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
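
DeleteTableProcedure does not remove HFiles outright: the HFileArchiver lines above show each region directory being moved from the table's .tmp/data location into archive/data/default/<table>/<region>, where cleaner chores can reclaim it later. A short sketch of checking that layout on the test DFS (same FileSystem assumptions as before; paths copied from the log):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ArchiveLayoutSketch {
      public static void main(String[] args) throws Exception {
        String root = "hdfs://localhost.localdomain:38415/user/jenkins/test-data/"
            + "73c9c31f-4444-563a-6597-e9b9636fd1e6";
        FileSystem fs = FileSystem.get(URI.create(root), new Configuration());
        // Per the log above, the region dir is gone from the table area but present under archive/.
        Path archived = new Path(root + "/archive/data/default/Group_testTableMoveTruncateAndDrop/"
            + "9ae65828b5bec4ad50f58197472e13dd");
        Path original = new Path(root + "/.tmp/data/default/Group_testTableMoveTruncateAndDrop/"
            + "9ae65828b5bec4ad50f58197472e13dd");
        System.out.println("archived=" + fs.exists(archived) + " original=" + fs.exists(original));
      }
    }
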
2023-07-21 11:17:17,571 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938237571"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:17,571 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938237571"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:17,572 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938237571"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:17,572 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938237571"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:17,572 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938237571"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:17,574 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-21 11:17:17,574 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 6a790ab5e1f584b002df603d22315e72, NAME => 'Group_testTableMoveTruncateAndDrop,,1689938236073.6a790ab5e1f584b002df603d22315e72.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 0c977df4e904be836935ec514fdb2bc4, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689938236073.0c977df4e904be836935ec514fdb2bc4.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 69e0a2376cc039a7b43998c06443681c, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689938236073.69e0a2376cc039a7b43998c06443681c.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => ab7fb6dff0c62c6b1f329521be3c61bf, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689938236073.ab7fb6dff0c62c6b1f329521be3c61bf.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 9ae65828b5bec4ad50f58197472e13dd, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689938236073.9ae65828b5bec4ad50f58197472e13dd.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-21 11:17:17,575 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
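
The five Delete mutations above remove ordinary rows from hbase:meta, one per region, keyed as '<table>,<startkey>,<regionid>.<encodedname>.'. While the table exists, the same rows can be read back with a plain client-side scan; a minimal sketch, assuming the 2.x client API (the stop row works because ',' + 1 == '-' in ASCII):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaRowsSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          Scan scan = new Scan()
              .withStartRow(Bytes.toBytes("Group_testTableMoveTruncateAndDrop,"))
              .withStopRow(Bytes.toBytes("Group_testTableMoveTruncateAndDrop-"));
          try (ResultScanner rs = meta.getScanner(scan)) {
            for (Result r : rs) {
              // info:regioninfo, info:server, info:state, ... live in these rows.
              System.out.println(Bytes.toStringBinary(r.getRow()));
            }
          }
        }
      }
    }
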
2023-07-21 11:17:17,575 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689938237575"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:17,577 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-21 11:17:17,579 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=74, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 11:17:17,581 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 127 msec 2023-07-21 11:17:17,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-21 11:17:17,587 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 74 completed 2023-07-21 11:17:17,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:17,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:17,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:17,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:17,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:17,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 11:17:17,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:17,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009] to rsgroup default 2023-07-21 11:17:17,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:17,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:17,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:17,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:17,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_287573559, current retry=0 2023-07-21 11:17:17,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,33011,1689938225358, jenkins-hbase17.apache.org,35009,1689938231406] are moved back to Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:17,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_287573559 => default 2023-07-21 11:17:17,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:17,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup Group_testTableMoveTruncateAndDrop_287573559 2023-07-21 11:17:17,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:17,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:17,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 11:17:17,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:17,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:17,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 11:17:17,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:17,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:17,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:17,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:17,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:17,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:17,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:17,655 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:17,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:17,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:17,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:17,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:17,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:17,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:17,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:17,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:40703] to rsgroup master 2023-07-21 11:17:17,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:17,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.CallRunner(144): callId: 148 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:42872 deadline: 1689939437693, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 2023-07-21 11:17:17,696 WARN [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:17:17,701 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:17,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:17,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:17,703 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009, jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:46255], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:17,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:17,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:17,750 INFO [Listener at localhost.localdomain/38409] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=496 (was 420) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63555@0x09c2e566-SendThread(127.0.0.1:63555) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp374221529-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-47789194-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-629465801_17 at /127.0.0.1:55648 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp374221529-637-acceptor-0@11899519-ServerConnector@12b776cf{HTTP/1.1, (http/1.1)}{0.0.0.0:40483} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35009 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35009 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2083248521_17 at /127.0.0.1:54176 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp374221529-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1799127051) connection to localhost.localdomain/127.0.0.1:38415 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp374221529-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp374221529-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase17:35009-longCompactions-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57c5acaf-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57c5acaf-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xc2f4991-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35009 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63555@0x09c2e566-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35009 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63555@0x09c2e566 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$91/1521805986.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1067718477_17 at /127.0.0.1:33978 [Receiving block BP-1027894687-136.243.18.41-1689938218944:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1027894687-136.243.18.41-1689938218944:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:35009Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6-prefix:jenkins-hbase17.apache.org,35009,1689938231406 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase17:35009 java.lang.Object.wait(Native Method) 
org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57c5acaf-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57c5acaf-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1067718477_17 at /127.0.0.1:54070 [Receiving block BP-1027894687-136.243.18.41-1689938218944:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost.localdomain:38415 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57c5acaf-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp374221529-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp374221529-636 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/688699424.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35009 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35009 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35009 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1027894687-136.243.18.41-1689938218944:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xc2f4991-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp374221529-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1067718477_17 at /127.0.0.1:33994 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57c5acaf-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1027894687-136.243.18.41-1689938218944:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1067718477_17 at /127.0.0.1:55614 [Receiving block BP-1027894687-136.243.18.41-1689938218944:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35009 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35009 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35009 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=786 (was 671) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=662 (was 668), ProcessCount=186 (was 186), AvailableMemoryMB=2000 (was 2361) 2023-07-21 11:17:17,783 INFO [Listener at localhost.localdomain/38409] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=496, OpenFileDescriptor=786, MaxFileDescriptor=60000, SystemLoadAverage=662, ProcessCount=186, AvailableMemoryMB=1998 2023-07-21 11:17:17,784 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-21 11:17:17,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:17,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:17,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:17,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 11:17:17,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:17,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:17,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:17,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:17,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:17,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:17,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:17,826 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:17,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:17,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:17,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:17,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:17,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:17,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:17,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:17,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:40703] to rsgroup master 2023-07-21 11:17:17,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:17,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.CallRunner(144): callId: 176 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:42872 deadline: 1689939437851, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 2023-07-21 11:17:17,852 WARN [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 11:17:17,854 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:17,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:17,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:17,856 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009, jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:46255], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:17,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:17,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:17,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup foo* 2023-07-21 11:17:17,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:17,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 136.243.18.41:42872 deadline: 1689939437858, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-21 11:17:17,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup foo@ 2023-07-21 
11:17:17,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:17,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 83 connection: 136.243.18.41:42872 deadline: 1689939437860, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-21 11:17:17,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup - 2023-07-21 11:17:17,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:17,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.CallRunner(144): callId: 186 service: MasterService methodName: ExecMasterService size: 80 connection: 136.243.18.41:42872 deadline: 1689939437861, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-21 11:17:17,863 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup foo_123 2023-07-21 11:17:17,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-21 11:17:17,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:17,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:17,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:17,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:17,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:17,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:17,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:17,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:17,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:17,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
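The testValidGroupNames exchange above shows the rsgroup name validator at work: foo*, foo@ and "-" are each rejected with ConstraintException ("RSGroup name should only contain alphanumeric characters"), while foo_123 is accepted, written to /hbase/rsgroup/foo_123, and later removed. A minimal sketch of that pattern follows, assuming the RSGroupAdminClient API named in the stack traces (addRSGroup/removeRSGroup) and that the server-side ConstraintException is re-thrown to the caller as the log shows; it is an illustration, not the TestRSGroupsAdmin1 source.

// Hedged sketch of the name-validation behavior visible above; not HBase test code.
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

final class GroupNameValidationSketch {
  static void exercise(RSGroupAdminClient rsGroupAdmin) throws Exception {
    // Names containing non-alphanumeric characters are refused by the group-name check.
    for (String badName : new String[] { "foo*", "foo@", "-" }) {
      try {
        rsGroupAdmin.addRSGroup(badName);
        throw new AssertionError("expected ConstraintException for " + badName);
      } catch (ConstraintException expected) {
        // "RSGroup name should only contain alphanumeric characters"
      }
    }
    // Underscores pass the check on this branch (despite the message wording), so
    // foo_123 is created and then removed so the next test starts from a clean list.
    rsGroupAdmin.addRSGroup("foo_123");
    rsGroupAdmin.removeRSGroup("foo_123");
  }
}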
2023-07-21 11:17:17,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:17,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:17,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:17,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup foo_123 2023-07-21 11:17:17,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:17,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:17,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 11:17:17,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:17,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:17,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
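The records around this point are the per-method restore in TestRSGroupsBase.tearDownAfterMethod: empty moveTables/moveServers calls are no-ops ("moveTables() passed an empty set. Ignoring."), the leftover foo_123 and master groups are removed, and a fresh "master" group is re-created before an attempt (just below, as at 11:17:17,852 earlier) to move the master's address into it; that move fails with ConstraintException because port 40703 is the HMaster, not a region server, and the test only logs it as "Got this on setup, FYI". A rough sketch of that restore step, assuming the RSGroupAdminClient methods named in the stack traces and treating rsGroupAdmin and masterAddress as given inputs:

// Hedged sketch of the restore-to-default step; not the real TestRSGroupsBase code.
import java.util.Collections;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

final class RestoreDefaultGroupsSketch {
  static void restore(RSGroupAdminClient rsGroupAdmin, Address masterAddress) throws Exception {
    // Empty sets are ignored server-side, as the DEBUG lines above show.
    rsGroupAdmin.moveTables(Collections.emptySet(), RSGroupInfo.DEFAULT_GROUP);
    rsGroupAdmin.moveServers(Collections.emptySet(), RSGroupInfo.DEFAULT_GROUP);

    // Drop every non-default group left over from the previous test method.
    for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
      if (!RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
        rsGroupAdmin.removeRSGroup(group.getName());
      }
    }

    // Re-create the "master" group and try to park the master's address in it.
    rsGroupAdmin.addRSGroup("master");
    try {
      rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
    } catch (ConstraintException expected) {
      // The master is not a region server, so its address is "either offline or it
      // does not exist"; the test only logs this and keeps going.
    }
  }
}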
2023-07-21 11:17:17,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:17,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:17,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:17,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:17,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:17,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:17,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:17,923 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:17,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:17,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:17,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:17,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:17,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:17,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:17,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:17,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:40703] to rsgroup master 2023-07-21 11:17:17,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:17,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.CallRunner(144): callId: 220 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:42872 deadline: 1689939437941, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 2023-07-21 11:17:17,942 WARN [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:17:17,944 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:17,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:17,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:17,946 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009, jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:46255], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:17,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:17,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:17,969 INFO [Listener at localhost.localdomain/38409] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=498 (was 496) Potentially hanging thread: hconnection-0xc2f4991-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xc2f4991-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xc2f4991-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=784 (was 786), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=662 (was 662), ProcessCount=186 (was 186), AvailableMemoryMB=1994 (was 1998) 2023-07-21 11:17:18,000 INFO [Listener at localhost.localdomain/38409] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=498, OpenFileDescriptor=784, MaxFileDescriptor=60000, SystemLoadAverage=662, ProcessCount=186, AvailableMemoryMB=1993 2023-07-21 11:17:18,002 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-21 11:17:18,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:18,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:18,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:18,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 11:17:18,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:18,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:18,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:18,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:18,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:18,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:18,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:18,029 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:18,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:18,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:18,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:18,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:18,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:18,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:18,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:18,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:40703] to rsgroup master 2023-07-21 11:17:18,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:18,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.CallRunner(144): callId: 248 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:42872 deadline: 1689939438058, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 2023-07-21 11:17:18,059 WARN [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:17:18,060 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:18,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:18,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:18,062 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009, jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:46255], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:18,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:18,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:18,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:18,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:18,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:18,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:18,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): 
Client=jenkins//136.243.18.41 add rsgroup bar 2023-07-21 11:17:18,074 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:18,074 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 11:17:18,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:18,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:18,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:18,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:18,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:18,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009] to rsgroup bar 2023-07-21 11:17:18,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:18,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 11:17:18,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:18,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:18,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(238): Moving server region 0d251b6fcd6df4af958f1fccdfdc34e4, which do not belong to RSGroup bar 2023-07-21 11:17:18,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=0d251b6fcd6df4af958f1fccdfdc34e4, REOPEN/MOVE 2023-07-21 11:17:18,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 11:17:18,106 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=0d251b6fcd6df4af958f1fccdfdc34e4, REOPEN/MOVE 2023-07-21 11:17:18,107 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=0d251b6fcd6df4af958f1fccdfdc34e4, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:18,107 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938238107"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938238107"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938238107"}]},"ts":"1689938238107"} 2023-07-21 11:17:18,109 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=75, state=RUNNABLE; CloseRegionProcedure 0d251b6fcd6df4af958f1fccdfdc34e4, server=jenkins-hbase17.apache.org,36863,1689938225106}] 2023-07-21 11:17:18,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:18,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 0d251b6fcd6df4af958f1fccdfdc34e4, disabling compactions & flushes 2023-07-21 11:17:18,265 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:18,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:18,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. after waiting 0 ms 2023-07-21 11:17:18,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:18,265 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 0d251b6fcd6df4af958f1fccdfdc34e4 1/1 column families, dataSize=5.05 KB heapSize=8.49 KB 2023-07-21 11:17:18,345 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.05 KB at sequenceid=32 (bloomFilter=true), to=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/.tmp/m/0e15854ea0ee4c21ace7b6f2f214572e 2023-07-21 11:17:18,357 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0e15854ea0ee4c21ace7b6f2f214572e 2023-07-21 11:17:18,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/.tmp/m/0e15854ea0ee4c21ace7b6f2f214572e as hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/m/0e15854ea0ee4c21ace7b6f2f214572e 2023-07-21 11:17:18,371 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0e15854ea0ee4c21ace7b6f2f214572e 2023-07-21 11:17:18,371 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/m/0e15854ea0ee4c21ace7b6f2f214572e, 
entries=9, sequenceid=32, filesize=5.5 K 2023-07-21 11:17:18,376 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.05 KB/5168, heapSize ~8.48 KB/8680, currentSize=0 B/0 for 0d251b6fcd6df4af958f1fccdfdc34e4 in 111ms, sequenceid=32, compaction requested=false 2023-07-21 11:17:18,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/recovered.edits/35.seqid, newMaxSeqId=35, maxSeqId=12 2023-07-21 11:17:18,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:17:18,391 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:18,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 0d251b6fcd6df4af958f1fccdfdc34e4: 2023-07-21 11:17:18,391 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 0d251b6fcd6df4af958f1fccdfdc34e4 move to jenkins-hbase17.apache.org,46255,1689938224878 record at close sequenceid=32 2023-07-21 11:17:18,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:18,394 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=0d251b6fcd6df4af958f1fccdfdc34e4, regionState=CLOSED 2023-07-21 11:17:18,394 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938238394"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938238394"}]},"ts":"1689938238394"} 2023-07-21 11:17:18,399 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=75 2023-07-21 11:17:18,399 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=75, state=SUCCESS; CloseRegionProcedure 0d251b6fcd6df4af958f1fccdfdc34e4, server=jenkins-hbase17.apache.org,36863,1689938225106 in 288 msec 2023-07-21 11:17:18,400 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=0d251b6fcd6df4af958f1fccdfdc34e4, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,46255,1689938224878; forceNewPlan=false, retain=false 2023-07-21 11:17:18,551 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=0d251b6fcd6df4af958f1fccdfdc34e4, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:18,551 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938238551"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938238551"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938238551"}]},"ts":"1689938238551"} 2023-07-21 11:17:18,557 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=75, state=RUNNABLE; OpenRegionProcedure 0d251b6fcd6df4af958f1fccdfdc34e4, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:18,716 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:18,717 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0d251b6fcd6df4af958f1fccdfdc34e4, NAME => 'hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:18,717 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:17:18,717 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. service=MultiRowMutationService 2023-07-21 11:17:18,717 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-21 11:17:18,717 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:18,717 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:18,717 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:18,717 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:18,719 INFO [StoreOpener-0d251b6fcd6df4af958f1fccdfdc34e4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:18,721 DEBUG [StoreOpener-0d251b6fcd6df4af958f1fccdfdc34e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/m 2023-07-21 11:17:18,721 DEBUG [StoreOpener-0d251b6fcd6df4af958f1fccdfdc34e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/m 2023-07-21 11:17:18,722 INFO [StoreOpener-0d251b6fcd6df4af958f1fccdfdc34e4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle 
point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0d251b6fcd6df4af958f1fccdfdc34e4 columnFamilyName m 2023-07-21 11:17:18,731 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0e15854ea0ee4c21ace7b6f2f214572e 2023-07-21 11:17:18,731 DEBUG [StoreOpener-0d251b6fcd6df4af958f1fccdfdc34e4-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/m/0e15854ea0ee4c21ace7b6f2f214572e 2023-07-21 11:17:18,745 DEBUG [StoreOpener-0d251b6fcd6df4af958f1fccdfdc34e4-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/m/18080b713b7b468e9af86a18ffb475be 2023-07-21 11:17:18,746 INFO [StoreOpener-0d251b6fcd6df4af958f1fccdfdc34e4-1] regionserver.HStore(310): Store=0d251b6fcd6df4af958f1fccdfdc34e4/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:18,747 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:18,750 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:18,768 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:18,770 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 0d251b6fcd6df4af958f1fccdfdc34e4; next sequenceid=36; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@41b74fa8, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:18,770 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 0d251b6fcd6df4af958f1fccdfdc34e4: 2023-07-21 11:17:18,777 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4., pid=77, masterSystemTime=1689938238712 2023-07-21 11:17:18,783 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:18,783 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 
2023-07-21 11:17:18,790 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=0d251b6fcd6df4af958f1fccdfdc34e4, regionState=OPEN, openSeqNum=36, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:18,790 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938238790"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938238790"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938238790"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938238790"}]},"ts":"1689938238790"} 2023-07-21 11:17:18,802 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=75 2023-07-21 11:17:18,802 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=75, state=SUCCESS; OpenRegionProcedure 0d251b6fcd6df4af958f1fccdfdc34e4, server=jenkins-hbase17.apache.org,46255,1689938224878 in 239 msec 2023-07-21 11:17:18,813 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=0d251b6fcd6df4af958f1fccdfdc34e4, REOPEN/MOVE in 699 msec 2023-07-21 11:17:19,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure.ProcedureSyncWait(216): waitFor pid=75 2023-07-21 11:17:19,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,33011,1689938225358, jenkins-hbase17.apache.org,35009,1689938231406, jenkins-hbase17.apache.org,36863,1689938225106] are moved back to default 2023-07-21 11:17:19,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-21 11:17:19,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:19,107 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36863] ipc.CallRunner(144): callId: 13 service: ClientService methodName: Scan size: 136 connection: 136.243.18.41:57446 deadline: 1689938299107, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=46255 startCode=1689938224878. As of locationSeqNum=32. 
2023-07-21 11:17:19,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:19,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:19,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=bar 2023-07-21 11:17:19,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:19,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:19,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-21 11:17:19,235 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:17:19,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 78 2023-07-21 11:17:19,236 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36863] ipc.CallRunner(144): callId: 186 service: ClientService methodName: ExecService size: 532 connection: 136.243.18.41:57452 deadline: 1689938299236, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=46255 startCode=1689938224878. As of locationSeqNum=32. 
2023-07-21 11:17:19,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-21 11:17:19,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-21 11:17:19,341 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:19,342 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 11:17:19,343 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:19,344 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:19,346 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:17:19,348 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:19,349 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56 empty. 2023-07-21 11:17:19,349 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:19,349 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-21 11:17:19,396 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:19,397 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => ec273dee91413e46d5abd6d3db493e56, NAME => 'Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:19,417 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:19,417 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing ec273dee91413e46d5abd6d3db493e56, disabling compactions & flushes 2023-07-21 11:17:19,417 INFO 
[RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:19,417 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:19,417 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. after waiting 0 ms 2023-07-21 11:17:19,417 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:19,417 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:19,417 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for ec273dee91413e46d5abd6d3db493e56: 2023-07-21 11:17:19,421 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:17:19,422 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689938239422"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938239422"}]},"ts":"1689938239422"} 2023-07-21 11:17:19,424 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 11:17:19,425 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:17:19,425 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938239425"}]},"ts":"1689938239425"} 2023-07-21 11:17:19,427 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-21 11:17:19,429 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec273dee91413e46d5abd6d3db493e56, ASSIGN}] 2023-07-21 11:17:19,431 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec273dee91413e46d5abd6d3db493e56, ASSIGN 2023-07-21 11:17:19,431 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec273dee91413e46d5abd6d3db493e56, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46255,1689938224878; forceNewPlan=false, retain=false 2023-07-21 11:17:19,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-21 11:17:19,583 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=ec273dee91413e46d5abd6d3db493e56, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:19,583 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689938239583"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938239583"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938239583"}]},"ts":"1689938239583"} 2023-07-21 11:17:19,584 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=79, state=RUNNABLE; OpenRegionProcedure ec273dee91413e46d5abd6d3db493e56, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:19,741 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 
2023-07-21 11:17:19,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ec273dee91413e46d5abd6d3db493e56, NAME => 'Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:19,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:19,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:19,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:19,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:19,744 INFO [StoreOpener-ec273dee91413e46d5abd6d3db493e56-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:19,746 DEBUG [StoreOpener-ec273dee91413e46d5abd6d3db493e56-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56/f 2023-07-21 11:17:19,747 DEBUG [StoreOpener-ec273dee91413e46d5abd6d3db493e56-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56/f 2023-07-21 11:17:19,747 INFO [StoreOpener-ec273dee91413e46d5abd6d3db493e56-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ec273dee91413e46d5abd6d3db493e56 columnFamilyName f 2023-07-21 11:17:19,749 INFO [StoreOpener-ec273dee91413e46d5abd6d3db493e56-1] regionserver.HStore(310): Store=ec273dee91413e46d5abd6d3db493e56/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:19,750 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:19,751 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:19,757 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:19,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:19,765 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened ec273dee91413e46d5abd6d3db493e56; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9769117440, jitterRate=-0.09018003940582275}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:19,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for ec273dee91413e46d5abd6d3db493e56: 2023-07-21 11:17:19,772 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56., pid=80, masterSystemTime=1689938239736 2023-07-21 11:17:19,774 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:19,775 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 
2023-07-21 11:17:19,775 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=ec273dee91413e46d5abd6d3db493e56, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:19,775 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689938239775"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938239775"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938239775"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938239775"}]},"ts":"1689938239775"} 2023-07-21 11:17:19,779 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=79 2023-07-21 11:17:19,779 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=79, state=SUCCESS; OpenRegionProcedure ec273dee91413e46d5abd6d3db493e56, server=jenkins-hbase17.apache.org,46255,1689938224878 in 193 msec 2023-07-21 11:17:19,783 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-21 11:17:19,783 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec273dee91413e46d5abd6d3db493e56, ASSIGN in 350 msec 2023-07-21 11:17:19,784 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:17:19,785 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938239784"}]},"ts":"1689938239784"} 2023-07-21 11:17:19,786 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-21 11:17:19,789 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:17:19,791 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 561 msec 2023-07-21 11:17:19,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-21 11:17:19,846 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 78 completed 2023-07-21 11:17:19,846 DEBUG [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-21 11:17:19,846 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:19,853 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
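
[Annotation] The CreateTableProcedure (pid=78) above and the subsequent "Waiting until all regions of table Group_testFailRemoveGroup get assigned" lines follow the standard minicluster test pattern. A minimal sketch of the equivalent client-side calls, assuming a started HBaseTestingUtility and its public createTable/waitUntilAllRegionsAssigned helpers (class name and cluster size here are illustrative):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateAndWaitSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(3);                          // three region servers, as in this run
        TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
        util.createTable(tn, Bytes.toBytes("f"));          // drives a CreateTableProcedure like pid=78 above
        util.waitUntilAllRegionsAssigned(tn);              // mirrors the Waiter/HBaseTestingUtility(3430) lines
        util.shutdownMiniCluster();
      }
    }
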
2023-07-21 11:17:19,853 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:19,854 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-21 11:17:19,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-21 11:17:19,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:19,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 11:17:19,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:19,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:19,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-21 11:17:19,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(345): Moving region ec273dee91413e46d5abd6d3db493e56 to RSGroup bar 2023-07-21 11:17:19,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:19,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:19,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:19,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:19,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 11:17:19,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:19,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec273dee91413e46d5abd6d3db493e56, REOPEN/MOVE 2023-07-21 11:17:19,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-21 11:17:19,884 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec273dee91413e46d5abd6d3db493e56, REOPEN/MOVE 2023-07-21 11:17:19,886 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=ec273dee91413e46d5abd6d3db493e56, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:19,886 DEBUG [PEWorker-3] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689938239886"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938239886"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938239886"}]},"ts":"1689938239886"} 2023-07-21 11:17:19,888 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure ec273dee91413e46d5abd6d3db493e56, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:20,025 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 11:17:20,046 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:20,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing ec273dee91413e46d5abd6d3db493e56, disabling compactions & flushes 2023-07-21 11:17:20,047 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:20,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:20,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. after waiting 0 ms 2023-07-21 11:17:20,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:20,060 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:20,062 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 
2023-07-21 11:17:20,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for ec273dee91413e46d5abd6d3db493e56: 2023-07-21 11:17:20,062 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding ec273dee91413e46d5abd6d3db493e56 move to jenkins-hbase17.apache.org,35009,1689938231406 record at close sequenceid=2 2023-07-21 11:17:20,065 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:20,069 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=ec273dee91413e46d5abd6d3db493e56, regionState=CLOSED 2023-07-21 11:17:20,069 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689938240068"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938240068"}]},"ts":"1689938240068"} 2023-07-21 11:17:20,076 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-21 11:17:20,076 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure ec273dee91413e46d5abd6d3db493e56, server=jenkins-hbase17.apache.org,46255,1689938224878 in 184 msec 2023-07-21 11:17:20,077 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec273dee91413e46d5abd6d3db493e56, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,35009,1689938231406; forceNewPlan=false, retain=false 2023-07-21 11:17:20,228 INFO [jenkins-hbase17:40703] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 11:17:20,228 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=ec273dee91413e46d5abd6d3db493e56, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:20,228 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689938240228"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938240228"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938240228"}]},"ts":"1689938240228"} 2023-07-21 11:17:20,230 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; OpenRegionProcedure ec273dee91413e46d5abd6d3db493e56, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:20,386 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 
2023-07-21 11:17:20,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ec273dee91413e46d5abd6d3db493e56, NAME => 'Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:20,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:20,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:20,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:20,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:20,388 INFO [StoreOpener-ec273dee91413e46d5abd6d3db493e56-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:20,389 DEBUG [StoreOpener-ec273dee91413e46d5abd6d3db493e56-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56/f 2023-07-21 11:17:20,389 DEBUG [StoreOpener-ec273dee91413e46d5abd6d3db493e56-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56/f 2023-07-21 11:17:20,389 INFO [StoreOpener-ec273dee91413e46d5abd6d3db493e56-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ec273dee91413e46d5abd6d3db493e56 columnFamilyName f 2023-07-21 11:17:20,390 INFO [StoreOpener-ec273dee91413e46d5abd6d3db493e56-1] regionserver.HStore(310): Store=ec273dee91413e46d5abd6d3db493e56/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:20,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:20,392 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:20,399 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:20,405 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened ec273dee91413e46d5abd6d3db493e56; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10071821440, jitterRate=-0.06198853254318237}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:20,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for ec273dee91413e46d5abd6d3db493e56: 2023-07-21 11:17:20,406 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56., pid=83, masterSystemTime=1689938240382 2023-07-21 11:17:20,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:20,411 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:20,411 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=ec273dee91413e46d5abd6d3db493e56, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:20,411 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689938240411"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938240411"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938240411"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938240411"}]},"ts":"1689938240411"} 2023-07-21 11:17:20,416 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-21 11:17:20,416 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; OpenRegionProcedure ec273dee91413e46d5abd6d3db493e56, server=jenkins-hbase17.apache.org,35009,1689938231406 in 183 msec 2023-07-21 11:17:20,419 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec273dee91413e46d5abd6d3db493e56, REOPEN/MOVE in 535 msec 2023-07-21 11:17:20,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-21 11:17:20,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
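
[Annotation] The MoveTables request and the REOPEN/MOVE procedure chain above (pid=81 with subprocedures 82/83) are what a single client-side moveTables call produces. A hedged sketch, assuming the RSGroupAdminClient from the hbase-rsgroup module (the single-argument Connection constructor is an assumption of this sketch, not taken from the log):

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);   // assumed constructor
          TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
          // One TransitRegionStateProcedure (REOPEN/MOVE) is scheduled per region of the table,
          // closing it on its current server and reopening it on a server of the target group.
          groups.moveTables(Collections.singleton(tn), "bar");
        }
      }
    }
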
2023-07-21 11:17:20,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:20,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:20,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:20,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=bar 2023-07-21 11:17:20,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:20,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup bar 2023-07-21 11:17:20,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:20,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.CallRunner(144): callId: 286 service: MasterService methodName: ExecMasterService size: 85 connection: 136.243.18.41:42872 deadline: 1689939440896, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-21 11:17:20,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009] to rsgroup default 2023-07-21 11:17:20,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:20,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.CallRunner(144): callId: 288 service: MasterService methodName: ExecMasterService size: 191 connection: 136.243.18.41:42872 deadline: 1689939440899, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-21 11:17:20,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-21 11:17:20,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:20,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 11:17:20,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:20,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:20,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-21 11:17:20,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(345): Moving region ec273dee91413e46d5abd6d3db493e56 to RSGroup default 2023-07-21 11:17:20,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec273dee91413e46d5abd6d3db493e56, REOPEN/MOVE 2023-07-21 11:17:20,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 11:17:20,911 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec273dee91413e46d5abd6d3db493e56, REOPEN/MOVE 2023-07-21 11:17:20,912 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ec273dee91413e46d5abd6d3db493e56, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:20,912 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689938240912"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938240912"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938240912"}]},"ts":"1689938240912"} 2023-07-21 11:17:20,914 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure ec273dee91413e46d5abd6d3db493e56, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:21,069 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:21,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing ec273dee91413e46d5abd6d3db493e56, disabling compactions & flushes 2023-07-21 11:17:21,070 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:21,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:21,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. after waiting 0 ms 2023-07-21 11:17:21,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:21,075 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 11:17:21,076 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 
2023-07-21 11:17:21,076 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for ec273dee91413e46d5abd6d3db493e56: 2023-07-21 11:17:21,077 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding ec273dee91413e46d5abd6d3db493e56 move to jenkins-hbase17.apache.org,46255,1689938224878 record at close sequenceid=5 2023-07-21 11:17:21,081 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:21,082 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ec273dee91413e46d5abd6d3db493e56, regionState=CLOSED 2023-07-21 11:17:21,083 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689938241082"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938241082"}]},"ts":"1689938241082"} 2023-07-21 11:17:21,092 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-21 11:17:21,093 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure ec273dee91413e46d5abd6d3db493e56, server=jenkins-hbase17.apache.org,35009,1689938231406 in 174 msec 2023-07-21 11:17:21,097 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec273dee91413e46d5abd6d3db493e56, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,46255,1689938224878; forceNewPlan=false, retain=false 2023-07-21 11:17:21,248 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ec273dee91413e46d5abd6d3db493e56, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:21,248 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689938241248"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938241248"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938241248"}]},"ts":"1689938241248"} 2023-07-21 11:17:21,252 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure ec273dee91413e46d5abd6d3db493e56, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:21,408 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 
2023-07-21 11:17:21,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ec273dee91413e46d5abd6d3db493e56, NAME => 'Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:21,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:21,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:21,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:21,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:21,410 INFO [StoreOpener-ec273dee91413e46d5abd6d3db493e56-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:21,412 DEBUG [StoreOpener-ec273dee91413e46d5abd6d3db493e56-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56/f 2023-07-21 11:17:21,412 DEBUG [StoreOpener-ec273dee91413e46d5abd6d3db493e56-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56/f 2023-07-21 11:17:21,413 INFO [StoreOpener-ec273dee91413e46d5abd6d3db493e56-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ec273dee91413e46d5abd6d3db493e56 columnFamilyName f 2023-07-21 11:17:21,413 INFO [StoreOpener-ec273dee91413e46d5abd6d3db493e56-1] regionserver.HStore(310): Store=ec273dee91413e46d5abd6d3db493e56/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:21,420 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:21,423 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:21,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:21,429 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened ec273dee91413e46d5abd6d3db493e56; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12043200960, jitterRate=0.12161049246788025}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:21,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for ec273dee91413e46d5abd6d3db493e56: 2023-07-21 11:17:21,430 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56., pid=86, masterSystemTime=1689938241404 2023-07-21 11:17:21,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:21,433 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:21,433 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ec273dee91413e46d5abd6d3db493e56, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:21,433 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689938241433"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938241433"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938241433"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938241433"}]},"ts":"1689938241433"} 2023-07-21 11:17:21,443 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-21 11:17:21,443 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure ec273dee91413e46d5abd6d3db493e56, server=jenkins-hbase17.apache.org,46255,1689938224878 in 184 msec 2023-07-21 11:17:21,445 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec273dee91413e46d5abd6d3db493e56, REOPEN/MOVE in 535 msec 2023-07-21 11:17:21,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-21 11:17:21,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
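
[Annotation] The ListRSGroupInfos/GetRSGroupInfo requests interleaved with these moves can be reproduced from the client; a hedged sketch of inspecting what group bar still owns at this point, under the same RSGroupAdminClient assumption as above:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class InspectGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);   // assumed constructor
          RSGroupInfo bar = groups.getRSGroupInfo("bar");
          System.out.println("tables still in bar:  " + bar.getTables());   // empty after the move back
          System.out.println("servers still in bar: " + bar.getServers());  // the three region servers
        }
      }
    }
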
2023-07-21 11:17:21,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:21,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:21,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:21,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup bar 2023-07-21 11:17:21,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:21,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.CallRunner(144): callId: 295 service: MasterService methodName: ExecMasterService size: 85 connection: 136.243.18.41:42872 deadline: 1689939441917, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. 
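
[Annotation] The two ConstraintException rejections above (callId 286: the group still has a table; callId 295: the group still has its three servers) are the behaviour testFailRemoveGroup exercises: an RSGroup can only be removed once it owns neither tables nor servers. A hedged sketch of that ordering; the RSGroupAdminClient constructor is an assumption, the server addresses are the ones listed in the MoveServers request above, and the failures are caught as IOException since the server-side ConstraintException arrives as a remote exception:

    import java.io.IOException;
    import java.util.Collections;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class FailRemoveGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);   // assumed constructor
          TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
          try {
            groups.removeRSGroup("bar");                 // rejected: group still owns the table
          } catch (IOException expected) {
            // server raises org.apache.hadoop.hbase.constraint.ConstraintException
          }
          groups.moveTables(Collections.singleton(tn), "default");
          try {
            groups.removeRSGroup("bar");                 // still rejected: group still owns 3 servers
          } catch (IOException expected) {
          }
          Set<Address> servers = new HashSet<>();        // the group's region servers, as logged above
          servers.add(Address.fromParts("jenkins-hbase17.apache.org", 36863));
          servers.add(Address.fromParts("jenkins-hbase17.apache.org", 33011));
          servers.add(Address.fromParts("jenkins-hbase17.apache.org", 35009));
          groups.moveServers(servers, "default");
          groups.removeRSGroup("bar");                   // now succeeds, as later in this log
        }
      }
    }
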
2023-07-21 11:17:21,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009] to rsgroup default 2023-07-21 11:17:21,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:21,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 11:17:21,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:21,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:21,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-21 11:17:21,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,33011,1689938225358, jenkins-hbase17.apache.org,35009,1689938231406, jenkins-hbase17.apache.org,36863,1689938225106] are moved back to bar 2023-07-21 11:17:21,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-21 11:17:21,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:21,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:21,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:21,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup bar 2023-07-21 11:17:21,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:21,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:21,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 11:17:21,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:21,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:21,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) 
master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:21,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:21,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:21,951 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-21 11:17:21,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testFailRemoveGroup 2023-07-21 11:17:21,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-21 11:17:21,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-21 11:17:21,957 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938241957"}]},"ts":"1689938241957"} 2023-07-21 11:17:21,959 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-21 11:17:21,960 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-21 11:17:21,962 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec273dee91413e46d5abd6d3db493e56, UNASSIGN}] 2023-07-21 11:17:21,964 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec273dee91413e46d5abd6d3db493e56, UNASSIGN 2023-07-21 11:17:21,965 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=88 updating hbase:meta row=ec273dee91413e46d5abd6d3db493e56, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:21,965 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689938241965"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938241965"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938241965"}]},"ts":"1689938241965"} 2023-07-21 11:17:21,969 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE; CloseRegionProcedure ec273dee91413e46d5abd6d3db493e56, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:22,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-21 11:17:22,121 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:22,125 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing ec273dee91413e46d5abd6d3db493e56, disabling compactions & flushes 2023-07-21 11:17:22,125 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:22,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:22,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. after waiting 0 ms 2023-07-21 11:17:22,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:22,130 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 11:17:22,130 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56. 2023-07-21 11:17:22,130 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for ec273dee91413e46d5abd6d3db493e56: 2023-07-21 11:17:22,132 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:22,132 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=88 updating hbase:meta row=ec273dee91413e46d5abd6d3db493e56, regionState=CLOSED 2023-07-21 11:17:22,132 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689938242132"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938242132"}]},"ts":"1689938242132"} 2023-07-21 11:17:22,135 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-21 11:17:22,135 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; CloseRegionProcedure ec273dee91413e46d5abd6d3db493e56, server=jenkins-hbase17.apache.org,46255,1689938224878 in 167 msec 2023-07-21 11:17:22,137 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-21 11:17:22,137 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec273dee91413e46d5abd6d3db493e56, UNASSIGN in 173 msec 2023-07-21 11:17:22,137 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938242137"}]},"ts":"1689938242137"} 2023-07-21 11:17:22,139 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-21 
11:17:22,140 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-21 11:17:22,142 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 189 msec 2023-07-21 11:17:22,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-21 11:17:22,260 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 87 completed 2023-07-21 11:17:22,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_testFailRemoveGroup 2023-07-21 11:17:22,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 11:17:22,266 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=90, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 11:17:22,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-21 11:17:22,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:22,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:22,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:22,272 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=90, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 11:17:22,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-21 11:17:22,283 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:22,285 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56/f, FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56/recovered.edits] 2023-07-21 11:17:22,293 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56/recovered.edits/10.seqid to 
hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56/recovered.edits/10.seqid 2023-07-21 11:17:22,295 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testFailRemoveGroup/ec273dee91413e46d5abd6d3db493e56 2023-07-21 11:17:22,295 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-21 11:17:22,300 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=90, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 11:17:22,305 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-21 11:17:22,308 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-21 11:17:22,329 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=90, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 11:17:22,329 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 2023-07-21 11:17:22,329 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938242329"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:22,335 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 11:17:22,335 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ec273dee91413e46d5abd6d3db493e56, NAME => 'Group_testFailRemoveGroup,,1689938239228.ec273dee91413e46d5abd6d3db493e56.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 11:17:22,335 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 
2023-07-21 11:17:22,335 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689938242335"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:22,345 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-21 11:17:22,353 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=90, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 11:17:22,360 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 91 msec 2023-07-21 11:17:22,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-21 11:17:22,376 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-21 11:17:22,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:22,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:22,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:22,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
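
The DISABLE (pid=87) and DELETE (pid=90) operations recorded above are driven from the client through the blocking HBase Admin API, which waits for the corresponding master procedure to finish before returning. A minimal sketch of the equivalent client calls, assuming an already-configured cluster connection (the wrapper class and main() method are illustrative, not taken from the test source):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
      // A table must be disabled before it can be deleted; each call blocks
      // until the master's DisableTableProcedure / DeleteTableProcedure completes.
      if (admin.tableExists(tn)) {
        admin.disableTable(tn);
        admin.deleteTable(tn);
      }
    }
  }
}
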
2023-07-21 11:17:22,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:22,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:22,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:22,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:22,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:22,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:22,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:22,401 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:22,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:22,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:22,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:22,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:22,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:22,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:22,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:22,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:40703] to rsgroup master 2023-07-21 11:17:22,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:22,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.CallRunner(144): callId: 343 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:42872 deadline: 1689939442445, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 2023-07-21 11:17:22,446 WARN [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:17:22,450 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:22,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:22,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:22,452 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009, jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:46255], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:22,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:22,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:22,490 INFO [Listener at localhost.localdomain/38409] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=506 (was 498) Potentially hanging thread: hconnection-0x57c5acaf-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xc2f4991-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57c5acaf-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2067296882_17 at /127.0.0.1:54176 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57c5acaf-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/cluster_037526aa-760a-5f2b-e269-4eb750dd63c1/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-629465801_17 at /127.0.0.1:50494 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57c5acaf-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/cluster_037526aa-760a-5f2b-e269-4eb750dd63c1/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xc2f4991-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2067296882_17 at 
/127.0.0.1:50488 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/cluster_037526aa-760a-5f2b-e269-4eb750dd63c1/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57c5acaf-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1067718477_17 at /127.0.0.1:53470 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x595febc4-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57c5acaf-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xc2f4991-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xc2f4991-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-629465801_17 at /127.0.0.1:33994 [Waiting for operation #11] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xc2f4991-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/cluster_037526aa-760a-5f2b-e269-4eb750dd63c1/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=789 (was 784) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=689 (was 662) - SystemLoadAverage LEAK? -, ProcessCount=186 (was 186), AvailableMemoryMB=1740 (was 1993) 2023-07-21 11:17:22,490 WARN [Listener at localhost.localdomain/38409] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-21 11:17:22,541 INFO [Listener at localhost.localdomain/38409] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=506, OpenFileDescriptor=789, MaxFileDescriptor=60000, SystemLoadAverage=689, ProcessCount=186, AvailableMemoryMB=1730 2023-07-21 11:17:22,541 WARN [Listener at localhost.localdomain/38409] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-21 11:17:22,541 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-21 11:17:22,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:22,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:22,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:22,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
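
The ConstraintException traces above come from the harness's rsgroup cleanup: it tries to move the master's address (jenkins-hbase17.apache.org:40703) into the "master" rsgroup, the master is not a live region server, so RSGroupAdminServer.moveServers rejects the request and TestRSGroupsBase only logs the failure and carries on. A rough sketch of that warn-and-continue pattern, assuming the RSGroupAdminClient wrapper shown in the stack traces is constructed over an open Connection (the hostname is copied from the log; the class wrapper and everything else is illustrative):

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class RSGroupCleanupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      try {
        // Ask the master to move its own address into the "master" group; the
        // server side rejects addresses that are not live region servers, which
        // in this run surfaces as a ConstraintException.
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromString("jenkins-hbase17.apache.org:40703")),
            "master");
      } catch (IOException e) {
        // Best-effort cleanup, mirroring the test's WARN-and-continue behaviour.
        System.out.println("Got this on setup, FYI: " + e.getMessage());
      }
    }
  }
}
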
2023-07-21 11:17:22,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:22,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:22,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:22,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:22,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:22,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:22,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:22,562 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:22,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:22,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:22,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:22,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:22,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:22,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:22,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:22,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:40703] to rsgroup master 2023-07-21 11:17:22,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:22,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.CallRunner(144): callId: 371 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:42872 deadline: 1689939442585, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 2023-07-21 11:17:22,586 WARN [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:17:22,590 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:22,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:22,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:22,592 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009, jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:46255], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:22,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:22,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:22,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:22,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:22,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup Group_testMultiTableMove_1105779021 2023-07-21 11:17:22,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1105779021 2023-07-21 11:17:22,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 
11:17:22,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:22,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:22,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:22,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:22,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:22,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33011] to rsgroup Group_testMultiTableMove_1105779021 2023-07-21 11:17:22,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:22,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1105779021 2023-07-21 11:17:22,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:22,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:22,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 11:17:22,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,33011,1689938225358] are moved back to default 2023-07-21 11:17:22,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_1105779021 2023-07-21 11:17:22,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:22,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:22,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:22,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1105779021 2023-07-21 11:17:22,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User 
jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:22,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:22,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 11:17:22,659 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:17:22,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 91 2023-07-21 11:17:22,663 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:22,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-21 11:17:22,664 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1105779021 2023-07-21 11:17:22,665 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:22,668 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:22,670 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:17:22,672 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:22,673 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791 empty. 
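
The create 'GrouptestMultiTableMoveA' request stored as pid=91 above is a plain Admin createTable call with a single column family 'f' and default table attributes. A minimal client-side sketch, assuming the standard HBase 2.x TableDescriptorBuilder/ColumnFamilyDescriptorBuilder API (not the test's actual helper code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateMoveTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // One column family 'f'; everything else left at the defaults echoed in the log
      // (VERSIONS=1, BLOCKSIZE=65536, REGION_REPLICATION=1, ...).
      TableName table = TableName.valueOf("GrouptestMultiTableMoveA");
      admin.createTable(
          TableDescriptorBuilder.newBuilder(table)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build());
      // createTable blocks until the CreateTableProcedure (pid=91 above) completes.
    }
  }
}
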
2023-07-21 11:17:22,674 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:22,674 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-21 11:17:22,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-21 11:17:22,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-21 11:17:23,142 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:23,144 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5f80f4a8041231d3b2cf5ca364ce6791, NAME => 'GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:23,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-21 11:17:23,565 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:23,565 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 5f80f4a8041231d3b2cf5ca364ce6791, disabling compactions & flushes 2023-07-21 11:17:23,565 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. 2023-07-21 11:17:23,565 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. 2023-07-21 11:17:23,565 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. after waiting 0 ms 2023-07-21 11:17:23,565 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. 2023-07-21 11:17:23,565 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. 
2023-07-21 11:17:23,565 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 5f80f4a8041231d3b2cf5ca364ce6791: 2023-07-21 11:17:23,569 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:17:23,570 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689938243570"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938243570"}]},"ts":"1689938243570"} 2023-07-21 11:17:23,572 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 11:17:23,573 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:17:23,574 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938243573"}]},"ts":"1689938243573"} 2023-07-21 11:17:23,576 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-21 11:17:23,580 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:23,580 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:23,580 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:23,580 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:23,580 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:23,580 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5f80f4a8041231d3b2cf5ca364ce6791, ASSIGN}] 2023-07-21 11:17:23,584 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5f80f4a8041231d3b2cf5ca364ce6791, ASSIGN 2023-07-21 11:17:23,587 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5f80f4a8041231d3b2cf5ca364ce6791, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46255,1689938224878; forceNewPlan=false, retain=false 2023-07-21 11:17:23,738 INFO [jenkins-hbase17:40703] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
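
The pid=91/92 parent/child chain above is the master's procedure framework driving table creation and region assignment. From a client, the same procedures can be observed; a minimal sketch, assuming Admin#getProcedures() as exposed in HBase 2.x, which returns a JSON dump of running and recently finished master procedures:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DumpProcedures {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // JSON snapshot of master procedures: CreateTableProcedure, TransitRegionStateProcedure,
      // OpenRegionProcedure and their parent/child pids, mirroring the pid=91/92/93 chain above.
      System.out.println(admin.getProcedures());
    }
  }
}
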
2023-07-21 11:17:23,739 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=5f80f4a8041231d3b2cf5ca364ce6791, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:23,739 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689938243739"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938243739"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938243739"}]},"ts":"1689938243739"} 2023-07-21 11:17:23,742 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=92, state=RUNNABLE; OpenRegionProcedure 5f80f4a8041231d3b2cf5ca364ce6791, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:23,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-21 11:17:23,898 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. 2023-07-21 11:17:23,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5f80f4a8041231d3b2cf5ca364ce6791, NAME => 'GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:23,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:23,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:23,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:23,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:23,905 INFO [StoreOpener-5f80f4a8041231d3b2cf5ca364ce6791-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:23,907 DEBUG [StoreOpener-5f80f4a8041231d3b2cf5ca364ce6791-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791/f 2023-07-21 11:17:23,907 DEBUG [StoreOpener-5f80f4a8041231d3b2cf5ca364ce6791-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791/f 2023-07-21 11:17:23,907 INFO [StoreOpener-5f80f4a8041231d3b2cf5ca364ce6791-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5f80f4a8041231d3b2cf5ca364ce6791 columnFamilyName f 2023-07-21 11:17:23,908 INFO [StoreOpener-5f80f4a8041231d3b2cf5ca364ce6791-1] regionserver.HStore(310): Store=5f80f4a8041231d3b2cf5ca364ce6791/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:23,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:23,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:23,912 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:23,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:23,914 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 5f80f4a8041231d3b2cf5ca364ce6791; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11344081760, jitterRate=0.056499943137168884}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:23,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 5f80f4a8041231d3b2cf5ca364ce6791: 2023-07-21 11:17:23,915 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791., pid=93, masterSystemTime=1689938243894 2023-07-21 11:17:23,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. 2023-07-21 11:17:23,916 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. 
2023-07-21 11:17:23,916 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=5f80f4a8041231d3b2cf5ca364ce6791, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:23,916 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689938243916"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938243916"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938243916"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938243916"}]},"ts":"1689938243916"} 2023-07-21 11:17:23,920 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=92 2023-07-21 11:17:23,920 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=92, state=SUCCESS; OpenRegionProcedure 5f80f4a8041231d3b2cf5ca364ce6791, server=jenkins-hbase17.apache.org,46255,1689938224878 in 176 msec 2023-07-21 11:17:23,921 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-21 11:17:23,921 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5f80f4a8041231d3b2cf5ca364ce6791, ASSIGN in 340 msec 2023-07-21 11:17:23,922 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:17:23,922 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938243922"}]},"ts":"1689938243922"} 2023-07-21 11:17:23,924 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-21 11:17:23,926 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:17:23,932 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 1.2740 sec 2023-07-21 11:17:24,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-21 11:17:24,770 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 91 completed 2023-07-21 11:17:24,770 DEBUG [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-21 11:17:24,771 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:24,775 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
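
The "Waiting until all regions of table ... get assigned" lines above come from HBaseTestingUtility(3430). In a test against the mini-cluster this is a single blocking call; a sketch of that fragment, assuming TEST_UTIL is the already-started HBaseTestingUtility of the surrounding test class:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForAssignment {
  // Fragment as it might appear inside a mini-cluster test.
  static void waitForTable(HBaseTestingUtility TEST_UTIL) throws Exception {
    TableName table = TableName.valueOf("GrouptestMultiTableMoveA");
    // Blocks (up to the 60,000 ms timeout logged above) until every region of the table
    // is marked OPEN in hbase:meta and reported as assigned by the assignment manager.
    TEST_UTIL.waitUntilAllRegionsAssigned(table);
  }
}
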
2023-07-21 11:17:24,775 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:24,775 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-21 11:17:24,778 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:24,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 11:17:24,781 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:17:24,781 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 94 2023-07-21 11:17:24,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 11:17:24,784 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:24,785 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1105779021 2023-07-21 11:17:24,785 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:24,785 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:24,788 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:17:24,790 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:24,790 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c empty. 
2023-07-21 11:17:24,791 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:24,791 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-21 11:17:24,804 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:24,805 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => af356d35b1676a8268d1151965bc707c, NAME => 'GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:24,817 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:24,817 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing af356d35b1676a8268d1151965bc707c, disabling compactions & flushes 2023-07-21 11:17:24,817 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. 2023-07-21 11:17:24,817 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. 2023-07-21 11:17:24,817 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. after waiting 0 ms 2023-07-21 11:17:24,817 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. 2023-07-21 11:17:24,817 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. 
2023-07-21 11:17:24,817 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for af356d35b1676a8268d1151965bc707c: 2023-07-21 11:17:24,820 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:17:24,821 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689938244821"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938244821"}]},"ts":"1689938244821"} 2023-07-21 11:17:24,822 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 11:17:24,823 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:17:24,823 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938244823"}]},"ts":"1689938244823"} 2023-07-21 11:17:24,825 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-21 11:17:24,828 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:24,829 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:24,829 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:24,829 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:24,829 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:24,829 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=af356d35b1676a8268d1151965bc707c, ASSIGN}] 2023-07-21 11:17:24,831 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=af356d35b1676a8268d1151965bc707c, ASSIGN 2023-07-21 11:17:24,832 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=af356d35b1676a8268d1151965bc707c, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,35009,1689938231406; forceNewPlan=false, retain=false 2023-07-21 11:17:24,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 11:17:24,983 INFO [jenkins-hbase17:40703] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
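
The Put entries logged by MetaTableAccessor/RegionStateStore above are ordinary cells written into hbase:meta (family info, qualifiers regioninfo, state, sn, server, and so on). They can be read back with a plain client scan; a hedged sketch, using only the column names visible in the qualifiers above and a prefix on the region row key format "<table>,<startkey>,<regionid>.<encoded>.":

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanMetaForTable {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      // Region rows for a table all share the "<table>," row prefix.
      Scan scan = new Scan().setRowPrefixFilter(Bytes.toBytes("GrouptestMultiTableMoveB,"));
      try (ResultScanner scanner = meta.getScanner(scan)) {
        for (Result r : scanner) {
          byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
          byte[] server = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("server"));
          System.out.println(Bytes.toString(r.getRow())
              + " state=" + (state == null ? "?" : Bytes.toString(state))
              + " server=" + (server == null ? "?" : Bytes.toString(server)));
        }
      }
    }
  }
}
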
2023-07-21 11:17:24,984 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=af356d35b1676a8268d1151965bc707c, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:24,984 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689938244984"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938244984"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938244984"}]},"ts":"1689938244984"} 2023-07-21 11:17:24,986 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure af356d35b1676a8268d1151965bc707c, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:25,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 11:17:25,145 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. 2023-07-21 11:17:25,145 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => af356d35b1676a8268d1151965bc707c, NAME => 'GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:25,145 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:25,145 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:25,146 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:25,146 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:25,148 INFO [StoreOpener-af356d35b1676a8268d1151965bc707c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:25,150 DEBUG [StoreOpener-af356d35b1676a8268d1151965bc707c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c/f 2023-07-21 11:17:25,150 DEBUG [StoreOpener-af356d35b1676a8268d1151965bc707c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c/f 2023-07-21 11:17:25,151 INFO [StoreOpener-af356d35b1676a8268d1151965bc707c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region af356d35b1676a8268d1151965bc707c columnFamilyName f 2023-07-21 11:17:25,151 INFO [StoreOpener-af356d35b1676a8268d1151965bc707c-1] regionserver.HStore(310): Store=af356d35b1676a8268d1151965bc707c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:25,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:25,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:25,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:25,160 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:25,161 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened af356d35b1676a8268d1151965bc707c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9923477920, jitterRate=-0.07580409944057465}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:25,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for af356d35b1676a8268d1151965bc707c: 2023-07-21 11:17:25,162 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c., pid=96, masterSystemTime=1689938245138 2023-07-21 11:17:25,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. 2023-07-21 11:17:25,164 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. 
2023-07-21 11:17:25,164 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=af356d35b1676a8268d1151965bc707c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:25,165 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689938245164"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938245164"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938245164"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938245164"}]},"ts":"1689938245164"} 2023-07-21 11:17:25,176 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-21 11:17:25,177 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure af356d35b1676a8268d1151965bc707c, server=jenkins-hbase17.apache.org,35009,1689938231406 in 180 msec 2023-07-21 11:17:25,178 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-21 11:17:25,178 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=af356d35b1676a8268d1151965bc707c, ASSIGN in 348 msec 2023-07-21 11:17:25,180 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:17:25,180 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938245180"}]},"ts":"1689938245180"} 2023-07-21 11:17:25,182 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-21 11:17:25,184 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:17:25,185 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 406 msec 2023-07-21 11:17:25,244 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 11:17:25,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 11:17:25,388 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 94 completed 2023-07-21 11:17:25,388 DEBUG [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. 
Timeout = 60000ms 2023-07-21 11:17:25,388 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:25,393 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 2023-07-21 11:17:25,393 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:25,393 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-21 11:17:25,394 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:25,405 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-21 11:17:25,405 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 11:17:25,406 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-21 11:17:25,407 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 11:17:25,407 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_1105779021 2023-07-21 11:17:25,410 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1105779021 2023-07-21 11:17:25,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:25,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1105779021 2023-07-21 11:17:25,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:25,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:25,414 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_1105779021 2023-07-21 11:17:25,414 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(345): Moving region af356d35b1676a8268d1151965bc707c to RSGroup Group_testMultiTableMove_1105779021 2023-07-21 11:17:25,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=97, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=af356d35b1676a8268d1151965bc707c, REOPEN/MOVE 2023-07-21 11:17:25,415 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_1105779021 2023-07-21 11:17:25,417 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(345): Moving region 5f80f4a8041231d3b2cf5ca364ce6791 to RSGroup Group_testMultiTableMove_1105779021 2023-07-21 11:17:25,417 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=af356d35b1676a8268d1151965bc707c, REOPEN/MOVE 2023-07-21 11:17:25,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5f80f4a8041231d3b2cf5ca364ce6791, REOPEN/MOVE 2023-07-21 11:17:25,418 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=af356d35b1676a8268d1151965bc707c, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:25,418 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_1105779021, current retry=0 2023-07-21 11:17:25,419 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5f80f4a8041231d3b2cf5ca364ce6791, REOPEN/MOVE 2023-07-21 11:17:25,419 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689938245418"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938245418"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938245418"}]},"ts":"1689938245418"} 2023-07-21 11:17:25,420 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=5f80f4a8041231d3b2cf5ca364ce6791, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:25,420 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689938245420"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938245420"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938245420"}]},"ts":"1689938245420"} 2023-07-21 11:17:25,421 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=97, state=RUNNABLE; CloseRegionProcedure af356d35b1676a8268d1151965bc707c, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:25,423 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=100, ppid=98, state=RUNNABLE; CloseRegionProcedure 5f80f4a8041231d3b2cf5ca364ce6791, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:25,575 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] 
handler.UnassignRegionHandler(111): Close af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:25,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing af356d35b1676a8268d1151965bc707c, disabling compactions & flushes 2023-07-21 11:17:25,576 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. 2023-07-21 11:17:25,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. 2023-07-21 11:17:25,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. after waiting 0 ms 2023-07-21 11:17:25,577 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. 2023-07-21 11:17:25,577 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:25,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 5f80f4a8041231d3b2cf5ca364ce6791, disabling compactions & flushes 2023-07-21 11:17:25,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. 2023-07-21 11:17:25,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. 2023-07-21 11:17:25,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. after waiting 0 ms 2023-07-21 11:17:25,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. 2023-07-21 11:17:25,606 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:25,606 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:25,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. 2023-07-21 11:17:25,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. 
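
The MoveTables request logged a few entries back (and the resulting pid=97/98 REOPEN/MOVE procedures now closing both regions) is a single client call that names both tables. A minimal sketch, again assuming the branch-2.4 RSGroupAdminClient API rather than the test's own wrapper:

import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTablesToGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      Set<TableName> tables = new HashSet<>();
      tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
      tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));
      // One call moves both tables; the master then closes each region on its current
      // server and reopens it on a server belonging to the target group.
      rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_1105779021");
    }
  }
}
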
2023-07-21 11:17:25,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for af356d35b1676a8268d1151965bc707c: 2023-07-21 11:17:25,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 5f80f4a8041231d3b2cf5ca364ce6791: 2023-07-21 11:17:25,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding af356d35b1676a8268d1151965bc707c move to jenkins-hbase17.apache.org,33011,1689938225358 record at close sequenceid=2 2023-07-21 11:17:25,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 5f80f4a8041231d3b2cf5ca364ce6791 move to jenkins-hbase17.apache.org,33011,1689938225358 record at close sequenceid=2 2023-07-21 11:17:25,610 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:25,613 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=5f80f4a8041231d3b2cf5ca364ce6791, regionState=CLOSED 2023-07-21 11:17:25,613 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689938245613"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938245613"}]},"ts":"1689938245613"} 2023-07-21 11:17:25,615 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:25,615 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=af356d35b1676a8268d1151965bc707c, regionState=CLOSED 2023-07-21 11:17:25,615 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689938245615"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938245615"}]},"ts":"1689938245615"} 2023-07-21 11:17:25,625 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=100, resume processing ppid=98 2023-07-21 11:17:25,625 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=100, ppid=98, state=SUCCESS; CloseRegionProcedure 5f80f4a8041231d3b2cf5ca364ce6791, server=jenkins-hbase17.apache.org,46255,1689938224878 in 197 msec 2023-07-21 11:17:25,626 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=97 2023-07-21 11:17:25,626 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5f80f4a8041231d3b2cf5ca364ce6791, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,33011,1689938225358; forceNewPlan=false, retain=false 2023-07-21 11:17:25,626 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=97, state=SUCCESS; CloseRegionProcedure af356d35b1676a8268d1151965bc707c, server=jenkins-hbase17.apache.org,35009,1689938231406 in 202 msec 2023-07-21 11:17:25,627 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure 
table=GrouptestMultiTableMoveB, region=af356d35b1676a8268d1151965bc707c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,33011,1689938225358; forceNewPlan=false, retain=false 2023-07-21 11:17:25,777 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=5f80f4a8041231d3b2cf5ca364ce6791, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:25,777 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=af356d35b1676a8268d1151965bc707c, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:25,777 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689938245776"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938245776"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938245776"}]},"ts":"1689938245776"} 2023-07-21 11:17:25,777 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689938245776"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938245776"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938245776"}]},"ts":"1689938245776"} 2023-07-21 11:17:25,779 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=98, state=RUNNABLE; OpenRegionProcedure 5f80f4a8041231d3b2cf5ca364ce6791, server=jenkins-hbase17.apache.org,33011,1689938225358}] 2023-07-21 11:17:25,781 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=97, state=RUNNABLE; OpenRegionProcedure af356d35b1676a8268d1151965bc707c, server=jenkins-hbase17.apache.org,33011,1689938225358}] 2023-07-21 11:17:26,140 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. 
2023-07-21 11:17:26,140 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5f80f4a8041231d3b2cf5ca364ce6791, NAME => 'GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:26,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:26,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:26,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:26,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:26,143 INFO [StoreOpener-5f80f4a8041231d3b2cf5ca364ce6791-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:26,145 DEBUG [StoreOpener-5f80f4a8041231d3b2cf5ca364ce6791-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791/f 2023-07-21 11:17:26,145 DEBUG [StoreOpener-5f80f4a8041231d3b2cf5ca364ce6791-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791/f 2023-07-21 11:17:26,146 INFO [StoreOpener-5f80f4a8041231d3b2cf5ca364ce6791-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5f80f4a8041231d3b2cf5ca364ce6791 columnFamilyName f 2023-07-21 11:17:26,147 INFO [StoreOpener-5f80f4a8041231d3b2cf5ca364ce6791-1] regionserver.HStore(310): Store=5f80f4a8041231d3b2cf5ca364ce6791/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:26,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:26,150 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:26,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:26,154 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 5f80f4a8041231d3b2cf5ca364ce6791; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11139846560, jitterRate=0.037479057908058167}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:26,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 5f80f4a8041231d3b2cf5ca364ce6791: 2023-07-21 11:17:26,166 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791., pid=101, masterSystemTime=1689938245931 2023-07-21 11:17:26,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. 2023-07-21 11:17:26,169 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. 2023-07-21 11:17:26,169 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. 
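The RegionStateStore puts above persist the OPENING/OPEN transitions into hbase:meta under the info family (regioninfo, sn, server, serverstartcode, seqnumDuringOpen, state, matching the qualifiers in the logged Put JSON). A short sketch of reading that state back with the plain client API, assuming the region row key copied from the log (class name MetaStateSketch is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaStateSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          // Row key copied from the RegionStateStore record above.
          byte[] row = Bytes.toBytes(
              "GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791.");
          Get get = new Get(row)
              .addColumn(Bytes.toBytes("info"), Bytes.toBytes("state"))
              .addColumn(Bytes.toBytes("info"), Bytes.toBytes("sn"));
          Result result = meta.get(get);
          System.out.println("state = " + Bytes.toString(
              result.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"))));
          System.out.println("sn    = " + Bytes.toString(
              result.getValue(Bytes.toBytes("info"), Bytes.toBytes("sn"))));
        }
      }
    }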
2023-07-21 11:17:26,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => af356d35b1676a8268d1151965bc707c, NAME => 'GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:26,170 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=5f80f4a8041231d3b2cf5ca364ce6791, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:26,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:26,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:26,170 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689938246170"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938246170"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938246170"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938246170"}]},"ts":"1689938246170"} 2023-07-21 11:17:26,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:26,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:26,172 INFO [StoreOpener-af356d35b1676a8268d1151965bc707c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:26,173 DEBUG [StoreOpener-af356d35b1676a8268d1151965bc707c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c/f 2023-07-21 11:17:26,174 DEBUG [StoreOpener-af356d35b1676a8268d1151965bc707c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c/f 2023-07-21 11:17:26,174 INFO [StoreOpener-af356d35b1676a8268d1151965bc707c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region af356d35b1676a8268d1151965bc707c columnFamilyName f 2023-07-21 11:17:26,175 INFO [StoreOpener-af356d35b1676a8268d1151965bc707c-1] regionserver.HStore(310): Store=af356d35b1676a8268d1151965bc707c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:26,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:26,176 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=98 2023-07-21 11:17:26,176 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=98, state=SUCCESS; OpenRegionProcedure 5f80f4a8041231d3b2cf5ca364ce6791, server=jenkins-hbase17.apache.org,33011,1689938225358 in 393 msec 2023-07-21 11:17:26,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:26,178 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=98, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5f80f4a8041231d3b2cf5ca364ce6791, REOPEN/MOVE in 759 msec 2023-07-21 11:17:26,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:26,184 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened af356d35b1676a8268d1151965bc707c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10024458880, jitterRate=-0.06639951467514038}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:26,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for af356d35b1676a8268d1151965bc707c: 2023-07-21 11:17:26,189 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c., pid=102, masterSystemTime=1689938245931 2023-07-21 11:17:26,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. 2023-07-21 11:17:26,191 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. 
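Once pid=97 and pid=98 finish, the verification path (the GetRSGroupInfoOfTable requests logged a little further below) can be approximated with the same rsgroup client. A hedged sketch reusing the RSGroupAdminClient assumed in the earlier moveTables snippet (wrapper class VerifyGroupSketch is illustrative):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class VerifyGroupSketch {
      // Reuses the rsGroupAdmin client built in the moveTables sketch further up.
      static void verifySameGroup(RSGroupAdminClient rsGroupAdmin) throws IOException {
        RSGroupInfo a = rsGroupAdmin.getRSGroupInfoOfTable(
            TableName.valueOf("GrouptestMultiTableMoveA"));
        RSGroupInfo b = rsGroupAdmin.getRSGroupInfoOfTable(
            TableName.valueOf("GrouptestMultiTableMoveB"));
        // The GetRSGroupInfoOfTable records below answer with
        // Group_testMultiTableMove_1105779021 for both tables.
        if (!a.getName().equals(b.getName())) {
          throw new IllegalStateException("tables landed in different rsgroups");
        }
      }
    }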
2023-07-21 11:17:26,192 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=af356d35b1676a8268d1151965bc707c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:26,192 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689938246192"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938246192"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938246192"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938246192"}]},"ts":"1689938246192"} 2023-07-21 11:17:26,202 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=97 2023-07-21 11:17:26,203 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=97, state=SUCCESS; OpenRegionProcedure af356d35b1676a8268d1151965bc707c, server=jenkins-hbase17.apache.org,33011,1689938225358 in 419 msec 2023-07-21 11:17:26,204 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=af356d35b1676a8268d1151965bc707c, REOPEN/MOVE in 788 msec 2023-07-21 11:17:26,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure.ProcedureSyncWait(216): waitFor pid=97 2023-07-21 11:17:26,419 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_1105779021. 2023-07-21 11:17:26,420 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:26,424 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:26,424 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:26,428 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-21 11:17:26,428 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 11:17:26,430 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-21 11:17:26,430 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 11:17:26,431 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:26,431 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:26,432 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1105779021 2023-07-21 11:17:26,432 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:26,434 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-21 11:17:26,434 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable GrouptestMultiTableMoveA 2023-07-21 11:17:26,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 11:17:26,438 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938246438"}]},"ts":"1689938246438"} 2023-07-21 11:17:26,440 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-21 11:17:26,441 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-21 11:17:26,442 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5f80f4a8041231d3b2cf5ca364ce6791, UNASSIGN}] 2023-07-21 11:17:26,444 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5f80f4a8041231d3b2cf5ca364ce6791, UNASSIGN 2023-07-21 11:17:26,444 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'GrouptestMultiTableMoveB' 2023-07-21 11:17:26,445 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'GrouptestMultiTableMoveA' 2023-07-21 11:17:26,445 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=5f80f4a8041231d3b2cf5ca364ce6791, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:26,445 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689938246445"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938246445"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938246445"}]},"ts":"1689938246445"} 2023-07-21 11:17:26,446 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-21 11:17:26,447 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE; CloseRegionProcedure 5f80f4a8041231d3b2cf5ca364ce6791, server=jenkins-hbase17.apache.org,33011,1689938225358}] 2023-07-21 11:17:26,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-21 11:17:26,600 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:26,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 5f80f4a8041231d3b2cf5ca364ce6791, disabling compactions & flushes 2023-07-21 11:17:26,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. 2023-07-21 11:17:26,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. 2023-07-21 11:17:26,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. after waiting 0 ms 2023-07-21 11:17:26,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. 2023-07-21 11:17:26,641 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 11:17:26,642 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791. 
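The disable (pid=103 above) and the delete that follows (pid=106, a few records below) are ordinary Admin operations; a minimal client-side sketch, assuming a standard Connection (class name DropTableSketch is illustrative):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableName tn = TableName.valueOf("GrouptestMultiTableMoveA");
          // Server side: DisableTableProcedure -> UNASSIGN -> CloseRegionProcedure, as logged above.
          admin.disableTable(tn);
          // Server side: DeleteTableProcedure -> archive region dirs, clean hbase:meta, as logged below.
          admin.deleteTable(tn);
        }
      }
    }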
2023-07-21 11:17:26,643 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 5f80f4a8041231d3b2cf5ca364ce6791: 2023-07-21 11:17:26,644 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:26,646 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=5f80f4a8041231d3b2cf5ca364ce6791, regionState=CLOSED 2023-07-21 11:17:26,647 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689938246646"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938246646"}]},"ts":"1689938246646"} 2023-07-21 11:17:26,652 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-21 11:17:26,652 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; CloseRegionProcedure 5f80f4a8041231d3b2cf5ca364ce6791, server=jenkins-hbase17.apache.org,33011,1689938225358 in 201 msec 2023-07-21 11:17:26,655 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=103 2023-07-21 11:17:26,655 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5f80f4a8041231d3b2cf5ca364ce6791, UNASSIGN in 210 msec 2023-07-21 11:17:26,656 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938246656"}]},"ts":"1689938246656"} 2023-07-21 11:17:26,657 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-21 11:17:26,658 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-21 11:17:26,660 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 224 msec 2023-07-21 11:17:26,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-21 11:17:26,748 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 103 completed 2023-07-21 11:17:26,749 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete GrouptestMultiTableMoveA 2023-07-21 11:17:26,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 11:17:26,753 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=106, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 11:17:26,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_1105779021' 2023-07-21 11:17:26,755 DEBUG 
[PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=106, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 11:17:26,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:26,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1105779021 2023-07-21 11:17:26,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:26,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:26,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-21 11:17:26,774 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:26,777 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791/f, FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791/recovered.edits] 2023-07-21 11:17:26,786 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791/recovered.edits/7.seqid to hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791/recovered.edits/7.seqid 2023-07-21 11:17:26,787 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791 2023-07-21 11:17:26,787 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-21 11:17:26,793 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=106, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 11:17:26,814 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-21 11:17:26,842 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-21 11:17:26,845 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=106, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 11:17:26,845 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
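The HFileArchiver records above show DeleteTableProcedure moving the region directory out of the .tmp data layout into the archive tree before clearing hbase:meta. A small sketch of inspecting that archive location with the Hadoop FileSystem API (the path is copied from the log and is specific to this test cluster; class name ArchiveListSketch is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ArchiveListSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Archive location copied from the HFileArchiver records above.
        Path archived = new Path("hdfs://localhost.localdomain:38415/user/jenkins/test-data/"
            + "73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/"
            + "GrouptestMultiTableMoveA/5f80f4a8041231d3b2cf5ca364ce6791");
        FileSystem fs = archived.getFileSystem(conf);
        for (FileStatus status : fs.listStatus(archived)) {
          // Expect the archived recovered.edits directory holding 7.seqid, per the log.
          System.out.println(status.getPath());
        }
      }
    }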
2023-07-21 11:17:26,845 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938246845"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:26,861 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 11:17:26,861 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 5f80f4a8041231d3b2cf5ca364ce6791, NAME => 'GrouptestMultiTableMoveA,,1689938242652.5f80f4a8041231d3b2cf5ca364ce6791.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 11:17:26,861 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-21 11:17:26,862 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689938246862"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:26,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-21 11:17:26,875 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-21 11:17:26,878 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=106, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 11:17:26,880 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 129 msec 2023-07-21 11:17:27,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-21 11:17:27,075 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-21 11:17:27,076 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-21 11:17:27,077 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable GrouptestMultiTableMoveB 2023-07-21 11:17:27,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 11:17:27,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-21 11:17:27,085 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938247085"}]},"ts":"1689938247085"} 2023-07-21 11:17:27,087 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-21 11:17:27,088 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-21 11:17:27,092 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, 
region=af356d35b1676a8268d1151965bc707c, UNASSIGN}] 2023-07-21 11:17:27,095 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=af356d35b1676a8268d1151965bc707c, UNASSIGN 2023-07-21 11:17:27,095 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=af356d35b1676a8268d1151965bc707c, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:27,096 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689938247095"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938247095"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938247095"}]},"ts":"1689938247095"} 2023-07-21 11:17:27,097 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, ppid=108, state=RUNNABLE; CloseRegionProcedure af356d35b1676a8268d1151965bc707c, server=jenkins-hbase17.apache.org,33011,1689938225358}] 2023-07-21 11:17:27,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-21 11:17:27,250 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:27,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing af356d35b1676a8268d1151965bc707c, disabling compactions & flushes 2023-07-21 11:17:27,251 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. 2023-07-21 11:17:27,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. 2023-07-21 11:17:27,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. after waiting 0 ms 2023-07-21 11:17:27,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. 2023-07-21 11:17:27,255 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 11:17:27,256 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c. 
2023-07-21 11:17:27,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for af356d35b1676a8268d1151965bc707c: 2023-07-21 11:17:27,257 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:27,258 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=af356d35b1676a8268d1151965bc707c, regionState=CLOSED 2023-07-21 11:17:27,258 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689938247258"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938247258"}]},"ts":"1689938247258"} 2023-07-21 11:17:27,262 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=108 2023-07-21 11:17:27,262 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=108, state=SUCCESS; CloseRegionProcedure af356d35b1676a8268d1151965bc707c, server=jenkins-hbase17.apache.org,33011,1689938225358 in 162 msec 2023-07-21 11:17:27,264 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-21 11:17:27,264 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=af356d35b1676a8268d1151965bc707c, UNASSIGN in 173 msec 2023-07-21 11:17:27,267 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938247266"}]},"ts":"1689938247266"} 2023-07-21 11:17:27,268 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-21 11:17:27,269 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-21 11:17:27,272 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 193 msec 2023-07-21 11:17:27,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-21 11:17:27,387 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 107 completed 2023-07-21 11:17:27,388 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete GrouptestMultiTableMoveB 2023-07-21 11:17:27,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 11:17:27,391 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=110, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 11:17:27,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_1105779021' 2023-07-21 11:17:27,391 DEBUG 
[PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=110, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 11:17:27,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1105779021 2023-07-21 11:17:27,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:27,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:27,395 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:27,397 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c/f, FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c/recovered.edits] 2023-07-21 11:17:27,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-21 11:17:27,402 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c/recovered.edits/7.seqid to hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c/recovered.edits/7.seqid 2023-07-21 11:17:27,402 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/GrouptestMultiTableMoveB/af356d35b1676a8268d1151965bc707c 2023-07-21 11:17:27,402 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-21 11:17:27,405 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=110, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 11:17:27,407 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-21 11:17:27,408 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-21 11:17:27,410 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=110, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 11:17:27,410 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
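RSGroupInfoManagerImpl mirrors every group change into ZooKeeper under /hbase/rsgroup, as the "Updating znode" and "Writing ZK GroupInfo count" records above show. A hedged sketch of listing those znodes with the plain ZooKeeper client (the quorum address and class name RsGroupZnodeSketch are illustrative; the znode path comes from the log):

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class RsGroupZnodeSketch {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
        try {
          List<String> groups = zk.getChildren("/hbase/rsgroup", false);
          // Expect children such as default, master and any test group still registered.
          groups.forEach(System.out::println);
        } finally {
          zk.close();
        }
      }
    }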
2023-07-21 11:17:27,410 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938247410"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:27,413 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 11:17:27,413 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => af356d35b1676a8268d1151965bc707c, NAME => 'GrouptestMultiTableMoveB,,1689938244778.af356d35b1676a8268d1151965bc707c.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 11:17:27,413 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-21 11:17:27,413 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689938247413"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:27,416 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-21 11:17:27,422 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=110, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 11:17:27,428 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 34 msec 2023-07-21 11:17:27,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-21 11:17:27,500 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-21 11:17:27,507 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:27,507 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:27,508 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:27,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 11:17:27,508 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:27,509 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33011] to rsgroup default 2023-07-21 11:17:27,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1105779021 2023-07-21 11:17:27,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:27,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:27,513 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_1105779021, current retry=0 2023-07-21 11:17:27,513 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,33011,1689938225358] are moved back to Group_testMultiTableMove_1105779021 2023-07-21 11:17:27,513 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_1105779021 => default 2023-07-21 11:17:27,513 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:27,514 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup Group_testMultiTableMove_1105779021 2023-07-21 11:17:27,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:27,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 11:17:27,520 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:27,521 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:27,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
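The teardown sequence in these records (move tables [] back, move the group's server back to default, remove the test group, then re-create the bookkeeping "master" group and try to park the active master's address in it) maps onto the same rsgroup client. A sketch under the same RSGroupAdminClient assumption, with the ConstraintException that the endpoint raises for the master address (port 40703, not a live region server) handled the way the subsequent records show the test does, logged and ignored (class name TeardownSketch is illustrative):

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.HashSet;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class TeardownSketch {
      static void restoreDefaults(RSGroupAdminClient rsGroupAdmin) throws IOException {
        // Move the group's only server back and drop the test group, as logged above.
        rsGroupAdmin.moveServers(
            new HashSet<>(Arrays.asList(Address.fromParts("jenkins-hbase17.apache.org", 33011))),
            "default");
        rsGroupAdmin.removeRSGroup("Group_testMultiTableMove_1105779021");
        // Re-create the "master" group, then attempt to move the master's address into it.
        rsGroupAdmin.addRSGroup("master");
        try {
          rsGroupAdmin.moveServers(
              new HashSet<>(Arrays.asList(Address.fromParts("jenkins-hbase17.apache.org", 40703))),
              "master");
        } catch (ConstraintException expected) {
          // Rejected because the address is not a known region server; the records
          // below show the same exception being logged as "Got this on setup, FYI".
        }
      }
    }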
2023-07-21 11:17:27,521 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:27,522 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:27,522 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:27,522 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:27,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:27,527 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:27,530 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:27,531 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:27,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:27,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:27,535 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:27,538 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:27,538 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:27,540 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:40703] to rsgroup master 2023-07-21 11:17:27,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:27,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.CallRunner(144): callId: 515 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:42872 deadline: 1689939447540, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 2023-07-21 11:17:27,541 WARN [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:17:27,542 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:27,543 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:27,543 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:27,544 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009, jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:46255], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:27,545 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:27,545 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:27,564 INFO [Listener at localhost.localdomain/38409] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=501 (was 506), OpenFileDescriptor=757 (was 789), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=674 (was 689), ProcessCount=186 (was 186), AvailableMemoryMB=1295 (was 1730) 2023-07-21 11:17:27,565 WARN [Listener at localhost.localdomain/38409] hbase.ResourceChecker(130): Thread=501 is superior to 500 2023-07-21 11:17:27,586 INFO [Listener at localhost.localdomain/38409] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=501, OpenFileDescriptor=757, MaxFileDescriptor=60000, SystemLoadAverage=674, ProcessCount=186, AvailableMemoryMB=1295 2023-07-21 11:17:27,586 WARN [Listener at localhost.localdomain/38409] hbase.ResourceChecker(130): Thread=501 is superior to 500 2023-07-21 11:17:27,587 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-21 11:17:27,592 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:27,592 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:27,593 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:27,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 11:17:27,593 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:27,594 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:27,594 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:27,595 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:27,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:27,600 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:27,603 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:27,604 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:27,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:27,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:27,618 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:27,623 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:27,623 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] 
master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:27,626 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:40703] to rsgroup master 2023-07-21 11:17:27,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:27,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.CallRunner(144): callId: 543 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:42872 deadline: 1689939447626, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 2023-07-21 11:17:27,627 WARN [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 11:17:27,629 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:27,630 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:27,630 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:27,630 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009, jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:46255], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:27,631 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:27,631 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:27,632 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:27,633 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:27,634 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup oldGroup 2023-07-21 11:17:27,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 11:17:27,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:27,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:27,645 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:27,649 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:27,650 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:27,656 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009] to rsgroup oldGroup 2023-07-21 11:17:27,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 11:17:27,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:27,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:27,662 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 11:17:27,663 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,33011,1689938225358, jenkins-hbase17.apache.org,35009,1689938231406] are moved back to default 2023-07-21 11:17:27,663 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-21 11:17:27,663 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:27,666 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:27,666 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:27,668 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=oldGroup 2023-07-21 11:17:27,668 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:27,669 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=oldGroup 2023-07-21 11:17:27,669 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:27,671 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:27,671 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:27,672 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup anotherRSGroup 2023-07-21 11:17:27,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-21 11:17:27,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 11:17:27,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:27,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 11:17:27,678 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:27,683 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:27,683 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:27,687 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:36863] to rsgroup anotherRSGroup 2023-07-21 11:17:27,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-21 11:17:27,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 11:17:27,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:27,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 11:17:27,696 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 11:17:27,696 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,36863,1689938225106] are moved back to default 2023-07-21 11:17:27,696 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-21 11:17:27,696 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 
11:17:27,700 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:27,700 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:27,721 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-21 11:17:27,721 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:27,722 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-21 11:17:27,722 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:27,730 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//136.243.18.41 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-21 11:17:27,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:27,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.CallRunner(144): callId: 577 service: MasterService methodName: ExecMasterService size: 113 connection: 136.243.18.41:42872 deadline: 1689939447729, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-21 11:17:27,733 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//136.243.18.41 rename rsgroup from oldGroup to anotherRSGroup 2023-07-21 11:17:27,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type 
org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:27,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.CallRunner(144): callId: 579 service: MasterService methodName: ExecMasterService size: 106 connection: 136.243.18.41:42872 deadline: 1689939447733, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-21 11:17:27,734 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//136.243.18.41 rename rsgroup from default to newRSGroup2 2023-07-21 11:17:27,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:27,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.CallRunner(144): callId: 581 service: MasterService methodName: ExecMasterService size: 102 connection: 136.243.18.41:42872 deadline: 1689939447734, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-21 11:17:27,736 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//136.243.18.41 rename rsgroup from oldGroup to default 2023-07-21 11:17:27,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: default at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:27,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.CallRunner(144): callId: 583 service: MasterService methodName: ExecMasterService size: 99 connection: 136.243.18.41:42872 deadline: 1689939447735, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-21 11:17:27,748 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:27,749 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:27,750 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:27,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
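
[editor's note] The three rejected renames above (a source group that does not exist, a target name that is already taken, and the reserved "default" group) all surface to the caller as org.apache.hadoop.hbase.constraint.ConstraintException, which is what the ipc.CallRunner entries record. A minimal client-side sketch of issuing such a rename and handling the constraint violation follows. It is only a sketch: it assumes a renameRSGroup(oldName, newName) method on RSGroupAdminClient, matching the RSGroupAdminEndpoint.renameRSGroup calls recorded in this log, and the group name is the one used by the test, not a real deployment.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameRSGroupSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // RSGroupAdminClient talks to the RSGroupAdminEndpoint coprocessor on the master.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      try {
        // Renaming "default" (or a missing group, or onto an existing name) is
        // rejected server-side, as the ConstraintException entries above show.
        rsGroupAdmin.renameRSGroup("default", "newRSGroup2"); // assumed client method, see note
      } catch (ConstraintException e) {
        // e.getMessage() would carry e.g. "Can't rename default rsgroup".
        System.err.println("Rename rejected: " + e.getMessage());
      }
    }
  }
}
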
2023-07-21 11:17:27,750 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:27,751 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:36863] to rsgroup default 2023-07-21 11:17:27,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-21 11:17:27,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 11:17:27,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:27,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 11:17:27,762 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-21 11:17:27,762 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,36863,1689938225106] are moved back to anotherRSGroup 2023-07-21 11:17:27,762 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-21 11:17:27,763 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:27,763 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup anotherRSGroup 2023-07-21 11:17:27,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 11:17:27,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:27,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-21 11:17:27,770 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:27,771 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:27,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-21 11:17:27,771 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:27,772 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009] to rsgroup default 2023-07-21 11:17:27,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 11:17:27,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:27,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:27,776 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-21 11:17:27,776 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,33011,1689938225358, jenkins-hbase17.apache.org,35009,1689938231406] are moved back to oldGroup 2023-07-21 11:17:27,776 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-21 11:17:27,776 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:27,777 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup oldGroup 2023-07-21 11:17:27,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:27,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 11:17:27,783 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:27,784 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:27,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
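
[editor's note] The cleanup moves logged above (returning jenkins-hbase17.apache.org:33011 and :35009 from oldGroup to default, then removing the group) go through the same RSGroupAdminClient.moveServers call that appears in the stack traces earlier in this log. A minimal sketch of that call, assuming a reachable cluster and using an illustrative host/port rather than the test's ephemeral ones:

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Address is the host:port of a live region server; "rs-host:16020" is a placeholder.
      Address server = Address.fromParts("rs-host", 16020);
      // Moves the server (and its regions) into the "default" group, as in the cleanup above.
      // An offline or unknown server is rejected with a ConstraintException
      // ("Server ... is either offline or it does not exist."), which is the benign
      // "Got this on setup, FYI" warning this test logs for the master's own address.
      rsGroupAdmin.moveServers(Collections.singleton(server), "default");
    }
  }
}
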
2023-07-21 11:17:27,784 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:27,785 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:27,785 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:27,785 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:27,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:27,790 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:27,792 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:27,793 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:27,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:27,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:27,797 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:27,800 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:27,800 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:27,802 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:40703] to rsgroup master 2023-07-21 11:17:27,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:27,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.CallRunner(144): callId: 619 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:42872 deadline: 1689939447802, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 2023-07-21 11:17:27,802 WARN [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:17:27,803 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:27,804 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:27,804 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:27,805 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009, jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:46255], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:27,805 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:27,805 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:27,825 INFO [Listener at localhost.localdomain/38409] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=504 (was 501) Potentially hanging thread: hconnection-0xc2f4991-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xc2f4991-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xc2f4991-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xc2f4991-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=757 (was 757), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=674 (was 674), ProcessCount=186 (was 186), AvailableMemoryMB=1291 (was 1295) 2023-07-21 11:17:27,825 WARN [Listener at localhost.localdomain/38409] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-21 11:17:27,843 INFO [Listener at localhost.localdomain/38409] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=504, OpenFileDescriptor=757, MaxFileDescriptor=60000, SystemLoadAverage=674, ProcessCount=186, AvailableMemoryMB=1291 2023-07-21 11:17:27,846 WARN [Listener at localhost.localdomain/38409] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-21 11:17:27,846 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-21 11:17:27,850 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:27,850 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:27,851 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:27,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 11:17:27,851 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:27,852 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:27,852 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:27,853 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:27,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:27,859 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:27,861 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:27,862 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:27,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:27,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:27,866 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:27,870 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:27,870 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:27,872 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:40703] to rsgroup master 2023-07-21 11:17:27,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:27,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.CallRunner(144): callId: 647 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:42872 deadline: 1689939447872, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 2023-07-21 11:17:27,873 WARN [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:17:27,874 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:27,875 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:27,875 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:27,875 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009, jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:46255], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:27,876 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:27,876 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:27,876 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:27,877 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:27,877 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup oldgroup 2023-07-21 11:17:27,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 11:17:27,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,881 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:27,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:27,882 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:27,886 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:27,886 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:27,888 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009] to rsgroup oldgroup 2023-07-21 11:17:27,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 11:17:27,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:27,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:27,892 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 11:17:27,892 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,33011,1689938225358, jenkins-hbase17.apache.org,35009,1689938231406] are moved back to default 2023-07-21 11:17:27,892 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-21 11:17:27,892 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:27,895 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:27,895 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:27,898 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=oldgroup 2023-07-21 11:17:27,898 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:27,899 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:27,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=111, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-21 11:17:27,902 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:17:27,902 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 111 2023-07-21 11:17:27,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-21 11:17:27,904 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 11:17:27,905 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:27,905 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:27,905 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:27,909 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:17:27,910 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/testRename/7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:27,911 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/testRename/7155e278007d2c2a97378c786865c2c6 empty. 
2023-07-21 11:17:27,911 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/testRename/7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:27,911 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-21 11:17:27,936 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:27,938 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7155e278007d2c2a97378c786865c2c6, NAME => 'testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:27,953 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:27,953 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 7155e278007d2c2a97378c786865c2c6, disabling compactions & flushes 2023-07-21 11:17:27,953 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:27,954 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:27,954 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. after waiting 0 ms 2023-07-21 11:17:27,954 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:27,954 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:27,954 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 7155e278007d2c2a97378c786865c2c6: 2023-07-21 11:17:27,956 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:17:27,957 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689938247957"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938247957"}]},"ts":"1689938247957"} 2023-07-21 11:17:27,958 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 11:17:27,959 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:17:27,959 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938247959"}]},"ts":"1689938247959"} 2023-07-21 11:17:27,960 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-21 11:17:27,963 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:27,963 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:27,963 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:27,963 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:27,963 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=7155e278007d2c2a97378c786865c2c6, ASSIGN}] 2023-07-21 11:17:27,966 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=7155e278007d2c2a97378c786865c2c6, ASSIGN 2023-07-21 11:17:27,966 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=7155e278007d2c2a97378c786865c2c6, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36863,1689938225106; forceNewPlan=false, retain=false 2023-07-21 11:17:28,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-21 11:17:28,117 INFO [jenkins-hbase17:40703] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 11:17:28,118 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=112 updating hbase:meta row=7155e278007d2c2a97378c786865c2c6, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:28,118 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689938248118"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938248118"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938248118"}]},"ts":"1689938248118"} 2023-07-21 11:17:28,120 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=113, ppid=112, state=RUNNABLE; OpenRegionProcedure 7155e278007d2c2a97378c786865c2c6, server=jenkins-hbase17.apache.org,36863,1689938225106}] 2023-07-21 11:17:28,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-21 11:17:28,275 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:28,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7155e278007d2c2a97378c786865c2c6, NAME => 'testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:28,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:28,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:28,276 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:28,276 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:28,277 INFO [StoreOpener-7155e278007d2c2a97378c786865c2c6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:28,278 DEBUG [StoreOpener-7155e278007d2c2a97378c786865c2c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/testRename/7155e278007d2c2a97378c786865c2c6/tr 2023-07-21 11:17:28,278 DEBUG [StoreOpener-7155e278007d2c2a97378c786865c2c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/testRename/7155e278007d2c2a97378c786865c2c6/tr 2023-07-21 11:17:28,278 INFO [StoreOpener-7155e278007d2c2a97378c786865c2c6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7155e278007d2c2a97378c786865c2c6 columnFamilyName tr 2023-07-21 11:17:28,279 INFO [StoreOpener-7155e278007d2c2a97378c786865c2c6-1] regionserver.HStore(310): Store=7155e278007d2c2a97378c786865c2c6/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:28,280 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/testRename/7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:28,280 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/testRename/7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:28,283 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:28,285 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/testRename/7155e278007d2c2a97378c786865c2c6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:28,285 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 7155e278007d2c2a97378c786865c2c6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11990528960, jitterRate=0.1167050302028656}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:28,285 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 7155e278007d2c2a97378c786865c2c6: 2023-07-21 11:17:28,286 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6., pid=113, masterSystemTime=1689938248272 2023-07-21 11:17:28,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:28,288 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 
2023-07-21 11:17:28,288 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=112 updating hbase:meta row=7155e278007d2c2a97378c786865c2c6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:28,288 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689938248288"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938248288"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938248288"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938248288"}]},"ts":"1689938248288"} 2023-07-21 11:17:28,291 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=113, resume processing ppid=112 2023-07-21 11:17:28,291 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=113, ppid=112, state=SUCCESS; OpenRegionProcedure 7155e278007d2c2a97378c786865c2c6, server=jenkins-hbase17.apache.org,36863,1689938225106 in 169 msec 2023-07-21 11:17:28,292 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-21 11:17:28,292 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=7155e278007d2c2a97378c786865c2c6, ASSIGN in 328 msec 2023-07-21 11:17:28,293 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:17:28,293 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938248293"}]},"ts":"1689938248293"} 2023-07-21 11:17:28,294 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-21 11:17:28,296 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:17:28,297 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=111, state=SUCCESS; CreateTableProcedure table=testRename in 397 msec 2023-07-21 11:17:28,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-21 11:17:28,507 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 111 completed 2023-07-21 11:17:28,507 DEBUG [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-21 11:17:28,507 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:28,511 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 
2023-07-21 11:17:28,511 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:28,511 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 2023-07-21 11:17:28,513 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [testRename] to rsgroup oldgroup 2023-07-21 11:17:28,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 11:17:28,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:28,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:28,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:28,518 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-21 11:17:28,518 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(345): Moving region 7155e278007d2c2a97378c786865c2c6 to RSGroup oldgroup 2023-07-21 11:17:28,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:28,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:28,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:28,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:28,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:28,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=7155e278007d2c2a97378c786865c2c6, REOPEN/MOVE 2023-07-21 11:17:28,519 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-21 11:17:28,519 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=7155e278007d2c2a97378c786865c2c6, REOPEN/MOVE 2023-07-21 11:17:28,520 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=7155e278007d2c2a97378c786865c2c6, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:28,520 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689938248520"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938248520"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938248520"}]},"ts":"1689938248520"} 2023-07-21 11:17:28,521 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure 7155e278007d2c2a97378c786865c2c6, server=jenkins-hbase17.apache.org,36863,1689938225106}] 2023-07-21 11:17:28,674 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:28,675 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 7155e278007d2c2a97378c786865c2c6, disabling compactions & flushes 2023-07-21 11:17:28,675 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:28,675 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:28,675 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. after waiting 0 ms 2023-07-21 11:17:28,675 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:28,679 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/testRename/7155e278007d2c2a97378c786865c2c6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:28,680 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 
2023-07-21 11:17:28,680 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 7155e278007d2c2a97378c786865c2c6: 2023-07-21 11:17:28,680 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 7155e278007d2c2a97378c786865c2c6 move to jenkins-hbase17.apache.org,35009,1689938231406 record at close sequenceid=2 2023-07-21 11:17:28,682 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:28,682 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=7155e278007d2c2a97378c786865c2c6, regionState=CLOSED 2023-07-21 11:17:28,682 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689938248682"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938248682"}]},"ts":"1689938248682"} 2023-07-21 11:17:28,685 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-21 11:17:28,685 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure 7155e278007d2c2a97378c786865c2c6, server=jenkins-hbase17.apache.org,36863,1689938225106 in 163 msec 2023-07-21 11:17:28,686 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=7155e278007d2c2a97378c786865c2c6, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,35009,1689938231406; forceNewPlan=false, retain=false 2023-07-21 11:17:28,836 INFO [jenkins-hbase17:40703] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 11:17:28,836 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=7155e278007d2c2a97378c786865c2c6, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:28,837 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689938248836"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938248836"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938248836"}]},"ts":"1689938248836"} 2023-07-21 11:17:28,839 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=114, state=RUNNABLE; OpenRegionProcedure 7155e278007d2c2a97378c786865c2c6, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:28,995 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 
2023-07-21 11:17:28,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7155e278007d2c2a97378c786865c2c6, NAME => 'testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:28,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:28,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:28,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:28,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:29,012 INFO [StoreOpener-7155e278007d2c2a97378c786865c2c6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:29,014 DEBUG [StoreOpener-7155e278007d2c2a97378c786865c2c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/testRename/7155e278007d2c2a97378c786865c2c6/tr 2023-07-21 11:17:29,015 DEBUG [StoreOpener-7155e278007d2c2a97378c786865c2c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/testRename/7155e278007d2c2a97378c786865c2c6/tr 2023-07-21 11:17:29,015 INFO [StoreOpener-7155e278007d2c2a97378c786865c2c6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7155e278007d2c2a97378c786865c2c6 columnFamilyName tr 2023-07-21 11:17:29,016 INFO [StoreOpener-7155e278007d2c2a97378c786865c2c6-1] regionserver.HStore(310): Store=7155e278007d2c2a97378c786865c2c6/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:29,017 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/testRename/7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:29,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/testRename/7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:29,029 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:29,030 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 7155e278007d2c2a97378c786865c2c6; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10548017120, jitterRate=-0.01763935387134552}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:29,030 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 7155e278007d2c2a97378c786865c2c6: 2023-07-21 11:17:29,031 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6., pid=116, masterSystemTime=1689938248990 2023-07-21 11:17:29,033 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:29,033 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:29,033 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=7155e278007d2c2a97378c786865c2c6, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:29,033 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689938249033"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938249033"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938249033"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938249033"}]},"ts":"1689938249033"} 2023-07-21 11:17:29,039 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=114 2023-07-21 11:17:29,040 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=114, state=SUCCESS; OpenRegionProcedure 7155e278007d2c2a97378c786865c2c6, server=jenkins-hbase17.apache.org,35009,1689938231406 in 199 msec 2023-07-21 11:17:29,042 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=7155e278007d2c2a97378c786865c2c6, REOPEN/MOVE in 522 msec 2023-07-21 11:17:29,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure.ProcedureSyncWait(216): waitFor pid=114 2023-07-21 11:17:29,519 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 
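The entries above trace a RSGroupAdminService.MoveTables call completing for testRename: the master reopens the table's region on a server belonging to group oldgroup (the REOPEN/MOVE procedure pid=114) and then reports all regions moved. A minimal client-side sketch of that call sequence follows (illustration only, not part of the captured log); it assumes the RSGroupAdminClient helper from this branch's hbase-rsgroup module, with method names as used by that module:

```java
// Illustration only, not part of the captured log. Assumes RSGroupAdminClient from
// the hbase-rsgroup module on this branch.
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTableToGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Move the table into the target group; the master reopens its regions on
      // servers of that group (the REOPEN/MOVE procedures traced above).
      rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
      // Verify placement, mirroring the GetRSGroupInfoOfTable requests in the log.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
      System.out.println("testRename is now in group: " + info.getName());
    }
  }
}
```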
2023-07-21 11:17:29,519 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:29,523 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:29,523 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:29,526 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:29,527 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=testRename 2023-07-21 11:17:29,527 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 11:17:29,527 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=oldgroup 2023-07-21 11:17:29,528 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:29,529 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=testRename 2023-07-21 11:17:29,529 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 11:17:29,530 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:29,530 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:29,531 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup normal 2023-07-21 11:17:29,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 11:17:29,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 11:17:29,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:29,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 
2023-07-21 11:17:29,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 11:17:29,536 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:29,540 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:29,540 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:29,542 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:36863] to rsgroup normal 2023-07-21 11:17:29,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 11:17:29,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 11:17:29,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:29,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:29,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 11:17:29,548 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 11:17:29,548 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,36863,1689938225106] are moved back to default 2023-07-21 11:17:29,548 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-21 11:17:29,548 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:29,551 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:29,551 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:29,556 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=normal 2023-07-21 11:17:29,556 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:29,558 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:29,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-21 11:17:29,562 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:17:29,562 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 117 2023-07-21 11:17:29,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-21 11:17:29,569 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 11:17:29,569 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 11:17:29,570 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:29,570 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:29,570 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 11:17:29,573 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:17:29,575 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:29,575 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437 empty. 
2023-07-21 11:17:29,576 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:29,576 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-21 11:17:29,604 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:29,608 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => dac93182b0e7c37b865b422b78986437, NAME => 'unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:29,630 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:29,630 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing dac93182b0e7c37b865b422b78986437, disabling compactions & flushes 2023-07-21 11:17:29,630 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:29,630 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:29,630 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. after waiting 0 ms 2023-07-21 11:17:29,630 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:29,630 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 
2023-07-21 11:17:29,630 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for dac93182b0e7c37b865b422b78986437: 2023-07-21 11:17:29,633 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:17:29,634 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689938249634"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938249634"}]},"ts":"1689938249634"} 2023-07-21 11:17:29,635 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 11:17:29,637 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:17:29,637 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938249637"}]},"ts":"1689938249637"} 2023-07-21 11:17:29,638 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-21 11:17:29,641 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=dac93182b0e7c37b865b422b78986437, ASSIGN}] 2023-07-21 11:17:29,646 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=dac93182b0e7c37b865b422b78986437, ASSIGN 2023-07-21 11:17:29,647 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=dac93182b0e7c37b865b422b78986437, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46255,1689938224878; forceNewPlan=false, retain=false 2023-07-21 11:17:29,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-21 11:17:29,732 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-21 11:17:29,799 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=dac93182b0e7c37b865b422b78986437, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:29,799 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689938249799"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938249799"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938249799"}]},"ts":"1689938249799"} 2023-07-21 11:17:29,801 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE; OpenRegionProcedure dac93182b0e7c37b865b422b78986437, 
server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:29,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-21 11:17:29,957 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:29,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dac93182b0e7c37b865b422b78986437, NAME => 'unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:29,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:29,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:29,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:29,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:29,960 INFO [StoreOpener-dac93182b0e7c37b865b422b78986437-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:29,962 DEBUG [StoreOpener-dac93182b0e7c37b865b422b78986437-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437/ut 2023-07-21 11:17:29,962 DEBUG [StoreOpener-dac93182b0e7c37b865b422b78986437-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437/ut 2023-07-21 11:17:29,963 INFO [StoreOpener-dac93182b0e7c37b865b422b78986437-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dac93182b0e7c37b865b422b78986437 columnFamilyName ut 2023-07-21 11:17:29,963 INFO [StoreOpener-dac93182b0e7c37b865b422b78986437-1] regionserver.HStore(310): Store=dac93182b0e7c37b865b422b78986437/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-07-21 11:17:29,964 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:29,964 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:29,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:29,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:29,971 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened dac93182b0e7c37b865b422b78986437; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9445777920, jitterRate=-0.12029337882995605}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:29,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for dac93182b0e7c37b865b422b78986437: 2023-07-21 11:17:29,972 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437., pid=119, masterSystemTime=1689938249953 2023-07-21 11:17:29,974 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:29,974 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 
2023-07-21 11:17:29,975 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=dac93182b0e7c37b865b422b78986437, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:29,975 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689938249975"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938249975"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938249975"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938249975"}]},"ts":"1689938249975"} 2023-07-21 11:17:29,981 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-21 11:17:29,981 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; OpenRegionProcedure dac93182b0e7c37b865b422b78986437, server=jenkins-hbase17.apache.org,46255,1689938224878 in 177 msec 2023-07-21 11:17:29,985 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-21 11:17:29,985 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=dac93182b0e7c37b865b422b78986437, ASSIGN in 340 msec 2023-07-21 11:17:29,986 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:17:29,986 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938249986"}]},"ts":"1689938249986"} 2023-07-21 11:17:29,988 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-21 11:17:29,991 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:17:29,993 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; CreateTableProcedure table=unmovedTable in 432 msec 2023-07-21 11:17:30,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-21 11:17:30,167 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 117 completed 2023-07-21 11:17:30,167 DEBUG [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-21 11:17:30,167 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:30,170 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 
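The entries above trace CreateTableProcedure pid=117 for 'unmovedTable' with a single column family 'ut': pre-operation, FS layout, add-to-meta, region assignment, descriptor cache update, post-operation, while the client keeps polling "Checking to see if procedure is done pid=117". A minimal sketch of the equivalent client call (illustration only, not part of the captured log), using the standard HBase 2.x Admin and descriptor builders:

```java
// Illustration only, not part of the captured log.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateUnmovedTable {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // One column family 'ut' with default settings, matching the descriptor dumped in the log.
      TableDescriptorBuilder td = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("unmovedTable"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("ut"));
      // createTable is synchronous: it returns once the CreateTableProcedure completes,
      // which is what the repeated "is procedure done" polling above corresponds to.
      admin.createTable(td.build());
    }
  }
}
```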
2023-07-21 11:17:30,170 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:30,170 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 2023-07-21 11:17:30,172 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [unmovedTable] to rsgroup normal 2023-07-21 11:17:30,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 11:17:30,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 11:17:30,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:30,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:30,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 11:17:30,176 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-21 11:17:30,176 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(345): Moving region dac93182b0e7c37b865b422b78986437 to RSGroup normal 2023-07-21 11:17:30,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=dac93182b0e7c37b865b422b78986437, REOPEN/MOVE 2023-07-21 11:17:30,177 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-21 11:17:30,177 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=dac93182b0e7c37b865b422b78986437, REOPEN/MOVE 2023-07-21 11:17:30,178 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=dac93182b0e7c37b865b422b78986437, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:30,178 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689938250178"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938250178"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938250178"}]},"ts":"1689938250178"} 2023-07-21 11:17:30,179 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE; CloseRegionProcedure dac93182b0e7c37b865b422b78986437, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:30,333 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:30,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegion(1604): Closing dac93182b0e7c37b865b422b78986437, disabling compactions & flushes 2023-07-21 11:17:30,334 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:30,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:30,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. after waiting 0 ms 2023-07-21 11:17:30,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:30,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:30,339 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:30,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for dac93182b0e7c37b865b422b78986437: 2023-07-21 11:17:30,339 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding dac93182b0e7c37b865b422b78986437 move to jenkins-hbase17.apache.org,36863,1689938225106 record at close sequenceid=2 2023-07-21 11:17:30,341 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:30,341 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=dac93182b0e7c37b865b422b78986437, regionState=CLOSED 2023-07-21 11:17:30,341 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689938250341"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938250341"}]},"ts":"1689938250341"} 2023-07-21 11:17:30,344 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-21 11:17:30,344 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure dac93182b0e7c37b865b422b78986437, server=jenkins-hbase17.apache.org,46255,1689938224878 in 164 msec 2023-07-21 11:17:30,345 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=dac93182b0e7c37b865b422b78986437, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,36863,1689938225106; forceNewPlan=false, retain=false 2023-07-21 11:17:30,496 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=dac93182b0e7c37b865b422b78986437, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:30,497 DEBUG [PEWorker-4] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689938250496"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938250496"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938250496"}]},"ts":"1689938250496"} 2023-07-21 11:17:30,499 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure dac93182b0e7c37b865b422b78986437, server=jenkins-hbase17.apache.org,36863,1689938225106}] 2023-07-21 11:17:30,648 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 11:17:30,668 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:30,668 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dac93182b0e7c37b865b422b78986437, NAME => 'unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:30,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:30,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:30,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:30,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:30,671 INFO [StoreOpener-dac93182b0e7c37b865b422b78986437-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:30,673 DEBUG [StoreOpener-dac93182b0e7c37b865b422b78986437-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437/ut 2023-07-21 11:17:30,673 DEBUG [StoreOpener-dac93182b0e7c37b865b422b78986437-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437/ut 2023-07-21 11:17:30,674 INFO [StoreOpener-dac93182b0e7c37b865b422b78986437-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dac93182b0e7c37b865b422b78986437 columnFamilyName ut 2023-07-21 11:17:30,675 INFO [StoreOpener-dac93182b0e7c37b865b422b78986437-1] regionserver.HStore(310): Store=dac93182b0e7c37b865b422b78986437/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:30,678 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:30,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:30,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:30,692 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened dac93182b0e7c37b865b422b78986437; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10330346560, jitterRate=-0.03791150450706482}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:30,692 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for dac93182b0e7c37b865b422b78986437: 2023-07-21 11:17:30,695 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437., pid=122, masterSystemTime=1689938250656 2023-07-21 11:17:30,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:30,696 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 
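The entries above show unmovedTable being pinned to rsgroup normal: the MoveTables request rewrites the /hbase/rsgroup znodes, then REOPEN/MOVE (pid=120) closes the region on jenkins-hbase17.apache.org,46255 and reopens it on jenkins-hbase17.apache.org,36863, the server previously moved into group normal. A sketch of the corresponding client calls (illustration only, not part of the captured log; same RSGroupAdminClient assumption as in the earlier sketch):

```java
// Illustration only, not part of the captured log. Assumes RSGroupAdminClient from
// the hbase-rsgroup module, as in the earlier sketch.
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class PinTableToGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Create the group, move one region server into it, then move the table;
      // the master reopens the table's region on that server (the REOPEN/MOVE above).
      rsGroupAdmin.addRSGroup("normal");
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromString("jenkins-hbase17.apache.org:36863")), "normal");
      rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("unmovedTable")), "normal");
    }
  }
}
```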
2023-07-21 11:17:30,698 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=dac93182b0e7c37b865b422b78986437, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:30,698 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689938250698"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938250698"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938250698"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938250698"}]},"ts":"1689938250698"} 2023-07-21 11:17:30,707 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-21 11:17:30,707 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure dac93182b0e7c37b865b422b78986437, server=jenkins-hbase17.apache.org,36863,1689938225106 in 201 msec 2023-07-21 11:17:30,709 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=dac93182b0e7c37b865b422b78986437, REOPEN/MOVE in 530 msec 2023-07-21 11:17:31,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-21 11:17:31,178 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-21 11:17:31,178 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:31,185 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:31,185 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:31,188 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:31,188 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=unmovedTable 2023-07-21 11:17:31,189 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 11:17:31,189 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=normal 2023-07-21 11:17:31,190 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:31,191 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=unmovedTable 2023-07-21 11:17:31,191 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 11:17:31,192 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//136.243.18.41 rename rsgroup from oldgroup to newgroup 2023-07-21 11:17:31,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 11:17:31,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:31,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:31,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 11:17:31,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-21 11:17:31,200 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RenameRSGroup 2023-07-21 11:17:31,203 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:31,203 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:31,206 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=newgroup 2023-07-21 11:17:31,206 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:31,207 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=testRename 2023-07-21 11:17:31,207 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 11:17:31,207 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=unmovedTable 2023-07-21 11:17:31,207 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 11:17:31,211 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:31,211 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:31,213 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [unmovedTable] to rsgroup default 2023-07-21 11:17:31,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 11:17:31,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:31,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:31,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 11:17:31,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 11:17:31,220 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-21 11:17:31,220 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(345): Moving region dac93182b0e7c37b865b422b78986437 to RSGroup default 2023-07-21 11:17:31,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=dac93182b0e7c37b865b422b78986437, REOPEN/MOVE 2023-07-21 11:17:31,221 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 11:17:31,221 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=dac93182b0e7c37b865b422b78986437, REOPEN/MOVE 2023-07-21 11:17:31,222 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=dac93182b0e7c37b865b422b78986437, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:31,222 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689938251222"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938251222"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938251222"}]},"ts":"1689938251222"} 2023-07-21 11:17:31,223 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure dac93182b0e7c37b865b422b78986437, server=jenkins-hbase17.apache.org,36863,1689938225106}] 2023-07-21 11:17:31,376 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 
dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:31,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing dac93182b0e7c37b865b422b78986437, disabling compactions & flushes 2023-07-21 11:17:31,377 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:31,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:31,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. after waiting 0 ms 2023-07-21 11:17:31,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:31,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 11:17:31,382 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:31,382 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for dac93182b0e7c37b865b422b78986437: 2023-07-21 11:17:31,383 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding dac93182b0e7c37b865b422b78986437 move to jenkins-hbase17.apache.org,46255,1689938224878 record at close sequenceid=5 2023-07-21 11:17:31,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:31,386 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=dac93182b0e7c37b865b422b78986437, regionState=CLOSED 2023-07-21 11:17:31,386 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689938251386"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938251386"}]},"ts":"1689938251386"} 2023-07-21 11:17:31,390 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-21 11:17:31,390 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure dac93182b0e7c37b865b422b78986437, server=jenkins-hbase17.apache.org,36863,1689938225106 in 166 msec 2023-07-21 11:17:31,391 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=dac93182b0e7c37b865b422b78986437, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,46255,1689938224878; forceNewPlan=false, retain=false 2023-07-21 11:17:31,541 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=dac93182b0e7c37b865b422b78986437, regionState=OPENING, 
regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:31,542 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689938251541"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938251541"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938251541"}]},"ts":"1689938251541"} 2023-07-21 11:17:31,543 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure dac93182b0e7c37b865b422b78986437, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:31,705 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:31,705 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dac93182b0e7c37b865b422b78986437, NAME => 'unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:31,705 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:31,705 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:31,706 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:31,706 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:31,707 INFO [StoreOpener-dac93182b0e7c37b865b422b78986437-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:31,710 DEBUG [StoreOpener-dac93182b0e7c37b865b422b78986437-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437/ut 2023-07-21 11:17:31,710 DEBUG [StoreOpener-dac93182b0e7c37b865b422b78986437-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437/ut 2023-07-21 11:17:31,710 INFO [StoreOpener-dac93182b0e7c37b865b422b78986437-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dac93182b0e7c37b865b422b78986437 columnFamilyName ut 2023-07-21 11:17:31,712 INFO [StoreOpener-dac93182b0e7c37b865b422b78986437-1] regionserver.HStore(310): Store=dac93182b0e7c37b865b422b78986437/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:31,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:31,715 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:31,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:31,730 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened dac93182b0e7c37b865b422b78986437; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9613549920, jitterRate=-0.10466839373111725}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:31,730 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for dac93182b0e7c37b865b422b78986437: 2023-07-21 11:17:31,731 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437., pid=125, masterSystemTime=1689938251695 2023-07-21 11:17:31,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:31,734 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 
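The entries above trace a REOPEN/MOVE transit for unmovedTable: the region dac93182b0e7c37b865b422b78986437 is closed on jenkins-hbase17.apache.org,36863, hbase:meta is updated, and the region is reopened on jenkins-hbase17.apache.org,46255. This close/reopen cycle is how the master carries out an rsgroup MoveTables request, as the RSGroupAdminServer entry that follows confirms ("All regions from table(s) [unmovedTable] moved to target group default"). A minimal client-side sketch of such a request is shown below; it assumes the RSGroupAdminClient class visible in the stack traces later in this log, and its construction and method signatures are approximate, not a verified reproduction of the test's code.

// Sketch only: a client-side approximation of the MoveTables request that produced the
// close/reopen cycle logged above. RSGroupAdminClient signatures are approximate.
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveUnmovedTableToDefault {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Moving a table between rsgroups reopens each of its regions on a server of the
      // target group; that is the CLOSE -> OPEN sequence recorded for
      // dac93182b0e7c37b865b422b78986437 above.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("unmovedTable")), "default");
    }
  }
}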
2023-07-21 11:17:31,734 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=dac93182b0e7c37b865b422b78986437, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:31,734 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689938251734"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938251734"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938251734"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938251734"}]},"ts":"1689938251734"} 2023-07-21 11:17:31,743 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-21 11:17:31,743 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure dac93182b0e7c37b865b422b78986437, server=jenkins-hbase17.apache.org,46255,1689938224878 in 194 msec 2023-07-21 11:17:31,745 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=dac93182b0e7c37b865b422b78986437, REOPEN/MOVE in 522 msec 2023-07-21 11:17:32,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-21 11:17:32,222 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-21 11:17:32,222 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:32,224 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:36863] to rsgroup default 2023-07-21 11:17:32,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 11:17:32,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:32,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:32,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 11:17:32,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 11:17:32,230 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-21 11:17:32,230 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,36863,1689938225106] are moved back to normal 2023-07-21 11:17:32,230 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-21 11:17:32,231 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:32,231 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup normal 2023-07-21 11:17:32,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:32,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:32,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 11:17:32,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-21 11:17:32,245 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:32,246 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:32,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 11:17:32,246 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:32,247 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:32,247 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:32,248 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:32,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:32,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 11:17:32,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 11:17:32,257 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:32,259 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [testRename] to rsgroup default 2023-07-21 11:17:32,263 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:32,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 11:17:32,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:32,264 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-21 11:17:32,265 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(345): Moving region 7155e278007d2c2a97378c786865c2c6 to RSGroup default 2023-07-21 11:17:32,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=7155e278007d2c2a97378c786865c2c6, REOPEN/MOVE 2023-07-21 11:17:32,266 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 11:17:32,266 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=7155e278007d2c2a97378c786865c2c6, REOPEN/MOVE 2023-07-21 11:17:32,266 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=7155e278007d2c2a97378c786865c2c6, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:32,267 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689938252266"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938252266"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938252266"}]},"ts":"1689938252266"} 2023-07-21 11:17:32,268 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 7155e278007d2c2a97378c786865c2c6, server=jenkins-hbase17.apache.org,35009,1689938231406}] 2023-07-21 11:17:32,421 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:32,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 7155e278007d2c2a97378c786865c2c6, disabling compactions & flushes 2023-07-21 11:17:32,422 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:32,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:32,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 
after waiting 0 ms 2023-07-21 11:17:32,423 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:32,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/testRename/7155e278007d2c2a97378c786865c2c6/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 11:17:32,442 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:32,443 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 7155e278007d2c2a97378c786865c2c6: 2023-07-21 11:17:32,443 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 7155e278007d2c2a97378c786865c2c6 move to jenkins-hbase17.apache.org,36863,1689938225106 record at close sequenceid=5 2023-07-21 11:17:32,444 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:32,445 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=7155e278007d2c2a97378c786865c2c6, regionState=CLOSED 2023-07-21 11:17:32,445 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689938252445"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938252445"}]},"ts":"1689938252445"} 2023-07-21 11:17:32,449 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-21 11:17:32,453 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-21 11:17:32,453 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 7155e278007d2c2a97378c786865c2c6, server=jenkins-hbase17.apache.org,35009,1689938231406 in 179 msec 2023-07-21 11:17:32,454 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=7155e278007d2c2a97378c786865c2c6, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,36863,1689938225106; forceNewPlan=false, retain=false 2023-07-21 11:17:32,604 INFO [jenkins-hbase17:40703] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
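Between the two region moves, the RPC handler on port 40703 restores the original cluster layout: server jenkins-hbase17.apache.org:36863 is moved from group normal back to default, the now-empty groups normal and master are removed, and testRename is moved back to default, which triggers the second REOPEN/MOVE (pid=126) seen above. A rough sketch of those teardown calls follows, again assuming the RSGroupAdminClient API (signatures approximate); the host and port are the ones in the log.

// Rough sketch of the teardown calls recorded above; not the test's actual code.
import java.util.Collections;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class RestoreDefaultGroup {
  static void restore(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // Move the region server out of group "normal" back to "default"; the master first
    // moves any regions of "normal" tables off the server, hence the
    // "Moving 0 region(s) to group normal" entry above.
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase17.apache.org", 36863)),
        "default");
    // A group can only be removed once it holds no servers and no tables.
    rsGroupAdmin.removeRSGroup("normal");
  }
}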
2023-07-21 11:17:32,605 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=7155e278007d2c2a97378c786865c2c6, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:32,605 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689938252605"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938252605"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938252605"}]},"ts":"1689938252605"} 2023-07-21 11:17:32,607 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 7155e278007d2c2a97378c786865c2c6, server=jenkins-hbase17.apache.org,36863,1689938225106}] 2023-07-21 11:17:32,764 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:32,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7155e278007d2c2a97378c786865c2c6, NAME => 'testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:32,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:32,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:32,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:32,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:32,766 INFO [StoreOpener-7155e278007d2c2a97378c786865c2c6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:32,767 DEBUG [StoreOpener-7155e278007d2c2a97378c786865c2c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/testRename/7155e278007d2c2a97378c786865c2c6/tr 2023-07-21 11:17:32,767 DEBUG [StoreOpener-7155e278007d2c2a97378c786865c2c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/testRename/7155e278007d2c2a97378c786865c2c6/tr 2023-07-21 11:17:32,768 INFO [StoreOpener-7155e278007d2c2a97378c786865c2c6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered 
compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7155e278007d2c2a97378c786865c2c6 columnFamilyName tr 2023-07-21 11:17:32,768 INFO [StoreOpener-7155e278007d2c2a97378c786865c2c6-1] regionserver.HStore(310): Store=7155e278007d2c2a97378c786865c2c6/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:32,769 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/testRename/7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:32,770 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/testRename/7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:32,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:32,775 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 7155e278007d2c2a97378c786865c2c6; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11101538080, jitterRate=0.03391130268573761}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:32,775 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 7155e278007d2c2a97378c786865c2c6: 2023-07-21 11:17:32,776 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6., pid=128, masterSystemTime=1689938252760 2023-07-21 11:17:32,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:32,778 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 
2023-07-21 11:17:32,778 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=7155e278007d2c2a97378c786865c2c6, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:32,778 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689938252778"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938252778"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938252778"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938252778"}]},"ts":"1689938252778"} 2023-07-21 11:17:32,782 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-21 11:17:32,782 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 7155e278007d2c2a97378c786865c2c6, server=jenkins-hbase17.apache.org,36863,1689938225106 in 173 msec 2023-07-21 11:17:32,784 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=7155e278007d2c2a97378c786865c2c6, REOPEN/MOVE in 517 msec 2023-07-21 11:17:33,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-21 11:17:33,266 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-21 11:17:33,266 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:33,267 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009] to rsgroup default 2023-07-21 11:17:33,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:33,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 11:17:33,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:33,271 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-21 11:17:33,271 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,33011,1689938225358, jenkins-hbase17.apache.org,35009,1689938231406] are moved back to newgroup 2023-07-21 11:17:33,271 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-21 11:17:33,271 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:33,272 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup newgroup 2023-07-21 11:17:33,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:33,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:33,278 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:33,280 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:33,281 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:33,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:33,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:33,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:33,289 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:33,293 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:33,294 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:33,296 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:40703] to rsgroup master 2023-07-21 11:17:33,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:33,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.CallRunner(144): callId: 767 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:42872 deadline: 1689939453296, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 2023-07-21 11:17:33,297 WARN [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:17:33,299 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:33,300 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:33,300 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:33,300 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009, jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:46255], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:33,301 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:33,301 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:33,322 INFO [Listener at localhost.localdomain/38409] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=497 (was 504), OpenFileDescriptor=746 (was 757), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=644 (was 674), ProcessCount=184 (was 186), AvailableMemoryMB=3454 (was 1291) - AvailableMemoryMB LEAK? 
- 2023-07-21 11:17:33,341 INFO [Listener at localhost.localdomain/38409] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=497, OpenFileDescriptor=746, MaxFileDescriptor=60000, SystemLoadAverage=644, ProcessCount=184, AvailableMemoryMB=3453 2023-07-21 11:17:33,341 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-21 11:17:33,347 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:33,347 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:33,348 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:33,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 11:17:33,348 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:33,348 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:33,348 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:33,349 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:33,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:33,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:33,356 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:33,365 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:33,367 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:33,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:33,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:33,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:33,376 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] 
master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:33,380 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:33,380 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:33,385 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:40703] to rsgroup master 2023-07-21 11:17:33,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:33,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.CallRunner(144): callId: 795 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:42872 deadline: 1689939453384, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 2023-07-21 11:17:33,386 WARN [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 11:17:33,388 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:33,391 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:33,391 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:33,392 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009, jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:46255], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:33,394 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:33,394 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:33,398 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=nonexistent 2023-07-21 11:17:33,398 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 11:17:33,405 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, server=bogus:123 2023-07-21 11:17:33,406 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-21 11:17:33,407 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=bogus 2023-07-21 11:17:33,407 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:33,408 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup bogus 2023-07-21 11:17:33,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:33,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.CallRunner(144): callId: 807 service: MasterService methodName: ExecMasterService size: 87 connection: 136.243.18.41:42872 deadline: 1689939453408, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-21 11:17:33,411 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [bogus:123] to rsgroup bogus 2023-07-21 11:17:33,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:33,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.CallRunner(144): callId: 810 service: MasterService methodName: ExecMasterService size: 96 connection: 136.243.18.41:42872 deadline: 1689939453411, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-21 11:17:33,416 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-21 11:17:33,417 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(492): Client=jenkins//136.243.18.41 set balanceSwitch=true 2023-07-21 11:17:33,425 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//136.243.18.41 balance rsgroup, group=bogus 2023-07-21 11:17:33,425 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:33,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.CallRunner(144): callId: 814 service: MasterService methodName: ExecMasterService size: 88 connection: 136.243.18.41:42872 deadline: 1689939453424, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-21 11:17:33,428 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:33,429 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:33,429 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:33,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
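The ExecMasterService calls above probe the rsgroup admin endpoint with bogus arguments: info lookups for a nonexistent group, table, and server simply come back empty, while removeRSGroup, moveServers, and balanceRSGroup against group "bogus" are each rejected with a ConstraintException ("RSGroup bogus does not exist" / "RSGroup does not exist: bogus"). A minimal client-side sketch of the same checks, assuming the RSGroupAdminClient and Address classes from the hbase-rsgroup module on this branch and a hypothetical already-open Connection conn (not a verbatim copy of the test):

import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class BogusArgsSketch {
  // Mirrors the requests logged above: lookups for nonexistent targets
  // return null, mutating calls on a nonexistent group are rejected.
  static void checkBogusArgs(Connection conn) throws Exception {
    RSGroupAdmin admin = new RSGroupAdminClient(conn);

    // Lookups: nonexistent group/table/server yield null, no exception.
    RSGroupInfo byGroup = admin.getRSGroupInfo("bogus");
    RSGroupInfo byTable = admin.getRSGroupInfoOfTable(TableName.valueOf("nonexistent"));
    RSGroupInfo byServer = admin.getRSGroupOfServer(Address.fromParts("bogus", 123));

    // Mutations against a nonexistent group are rejected by the master.
    try {
      admin.removeRSGroup("bogus");
    } catch (ConstraintException e) {
      // "RSGroup bogus does not exist"
    }
    try {
      admin.moveServers(Collections.singleton(Address.fromParts("bogus", 123)), "bogus");
    } catch (ConstraintException e) {
      // "RSGroup does not exist: bogus"
    }
    try {
      admin.balanceRSGroup("bogus");
    } catch (ConstraintException e) {
      // "RSGroup does not exist: bogus"
    }
  }
}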
2023-07-21 11:17:33,429 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:33,430 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:33,430 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:33,431 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:33,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:33,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:33,436 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:33,439 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:33,440 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:33,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:33,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:33,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:33,467 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:33,471 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:33,471 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:33,474 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:40703] to rsgroup master 2023-07-21 11:17:33,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:33,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.CallRunner(144): callId: 838 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:42872 deadline: 1689939453473, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 2023-07-21 11:17:33,478 WARN [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:17:33,479 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:33,480 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:33,481 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:33,481 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009, jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:46255], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:33,482 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:33,482 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:33,507 INFO [Listener at localhost.localdomain/38409] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=501 (was 497) Potentially hanging thread: hconnection-0xc2f4991-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xc2f4991-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57c5acaf-shared-pool-30 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57c5acaf-shared-pool-29 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=746 (was 746), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=644 (was 644), ProcessCount=184 (was 184), AvailableMemoryMB=3443 (was 3453) 2023-07-21 11:17:33,507 WARN [Listener at localhost.localdomain/38409] hbase.ResourceChecker(130): Thread=501 is superior to 500 2023-07-21 11:17:33,530 INFO [Listener at localhost.localdomain/38409] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=501, OpenFileDescriptor=746, MaxFileDescriptor=60000, SystemLoadAverage=644, ProcessCount=184, AvailableMemoryMB=3441 2023-07-21 11:17:33,530 WARN [Listener at localhost.localdomain/38409] hbase.ResourceChecker(130): Thread=501 is superior to 500 2023-07-21 11:17:33,531 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-21 11:17:33,537 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:33,537 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:33,538 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:33,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 11:17:33,538 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:33,540 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:33,540 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:33,541 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:33,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:33,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:33,546 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:33,550 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:33,550 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:33,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:33,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:33,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:33,562 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:33,566 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:33,567 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:33,575 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:40703] to rsgroup master 2023-07-21 11:17:33,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:33,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] ipc.CallRunner(144): callId: 866 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:42872 deadline: 1689939453574, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 2023-07-21 11:17:33,575 WARN [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:17:33,577 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:33,578 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:33,579 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:33,579 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009, jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:46255], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:33,580 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:33,580 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:33,581 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:33,581 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:33,587 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup Group_testDisabledTableMove_1426909851 2023-07-21 11:17:33,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1426909851 2023-07-21 11:17:33,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/default 2023-07-21 11:17:33,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:33,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:33,596 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:33,603 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:33,603 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:33,611 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009] to rsgroup Group_testDisabledTableMove_1426909851 2023-07-21 11:17:33,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1426909851 2023-07-21 11:17:33,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:33,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:33,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:33,619 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 11:17:33,619 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,33011,1689938225358, jenkins-hbase17.apache.org,35009,1689938231406] are moved back to default 2023-07-21 11:17:33,619 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1426909851 2023-07-21 11:17:33,619 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:33,623 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:33,624 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:33,639 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1426909851 
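The AddRSGroup/MoveServers requests above create Group_testDisabledTableMove_1426909851 and move two region servers out of default into it; per the RSGroupAdminServer entries, any regions on those servers are moved back to the remaining default members before the membership change is persisted to the group znodes. A hedged sketch of the equivalent client calls, again assuming RSGroupAdminClient plus hypothetical Address values for the two region servers:

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class NewGroupSketch {
  // Creates a new RSGroup and moves two region servers out of "default"
  // into it, mirroring the AddRSGroup/MoveServers requests logged above.
  static RSGroupInfo addGroupWithServers(Connection conn, String group,
      Address rs1, Address rs2) throws Exception {
    RSGroupAdmin admin = new RSGroupAdminClient(conn);
    admin.addRSGroup(group);

    Set<Address> servers = new HashSet<>();
    servers.add(rs1);
    servers.add(rs2);
    // Regions hosted on rs1/rs2 are moved back to the default group's
    // remaining servers before the move completes.
    admin.moveServers(servers, group);

    return admin.getRSGroupInfo(group); // now lists rs1 and rs2
  }
}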
2023-07-21 11:17:33,639 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:33,642 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:33,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-21 11:17:33,650 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:17:33,650 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40703] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 129 2023-07-21 11:17:33,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-21 11:17:33,653 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1426909851 2023-07-21 11:17:33,661 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:33,664 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:33,665 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:33,670 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:17:33,677 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/79a7a6396c42f41507da8db17214c982 2023-07-21 11:17:33,678 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/20662e4645c6626e1e36b48bf00b79b3 2023-07-21 11:17:33,678 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/7c77e20d27514259a2f6abd58ecb8eb1 2023-07-21 11:17:33,678 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/4e2c99ee00b48b6a06cf7330d2a34d3f 2023-07-21 11:17:33,678 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/39e7abb37b753473837a82599c2bd27c 2023-07-21 11:17:33,679 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/79a7a6396c42f41507da8db17214c982 empty. 2023-07-21 11:17:33,680 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/20662e4645c6626e1e36b48bf00b79b3 empty. 2023-07-21 11:17:33,680 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/7c77e20d27514259a2f6abd58ecb8eb1 empty. 2023-07-21 11:17:33,680 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/39e7abb37b753473837a82599c2bd27c empty. 2023-07-21 11:17:33,680 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/4e2c99ee00b48b6a06cf7330d2a34d3f empty. 2023-07-21 11:17:33,680 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/79a7a6396c42f41507da8db17214c982 2023-07-21 11:17:33,680 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/39e7abb37b753473837a82599c2bd27c 2023-07-21 11:17:33,680 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/20662e4645c6626e1e36b48bf00b79b3 2023-07-21 11:17:33,680 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/7c77e20d27514259a2f6abd58ecb8eb1 2023-07-21 11:17:33,681 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/4e2c99ee00b48b6a06cf7330d2a34d3f 2023-07-21 11:17:33,681 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-21 11:17:33,728 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:33,729 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 7c77e20d27514259a2f6abd58ecb8eb1, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1.', STARTKEY => 
'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:33,729 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 20662e4645c6626e1e36b48bf00b79b3, NAME => 'Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:33,729 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 79a7a6396c42f41507da8db17214c982, NAME => 'Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:33,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-21 11:17:33,774 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:33,774 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 79a7a6396c42f41507da8db17214c982, disabling compactions & flushes 2023-07-21 11:17:33,774 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982. 2023-07-21 11:17:33,774 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982. 2023-07-21 11:17:33,774 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982. 
after waiting 0 ms 2023-07-21 11:17:33,774 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982. 2023-07-21 11:17:33,774 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982. 2023-07-21 11:17:33,774 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 79a7a6396c42f41507da8db17214c982: 2023-07-21 11:17:33,775 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4e2c99ee00b48b6a06cf7330d2a34d3f, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:33,776 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:33,777 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 20662e4645c6626e1e36b48bf00b79b3, disabling compactions & flushes 2023-07-21 11:17:33,777 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3. 2023-07-21 11:17:33,777 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3. 2023-07-21 11:17:33,777 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3. after waiting 0 ms 2023-07-21 11:17:33,777 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3. 2023-07-21 11:17:33,777 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3. 
2023-07-21 11:17:33,777 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 20662e4645c6626e1e36b48bf00b79b3: 2023-07-21 11:17:33,777 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 39e7abb37b753473837a82599c2bd27c, NAME => 'Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp 2023-07-21 11:17:33,778 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:33,779 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 7c77e20d27514259a2f6abd58ecb8eb1, disabling compactions & flushes 2023-07-21 11:17:33,779 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1. 2023-07-21 11:17:33,779 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1. 2023-07-21 11:17:33,779 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1. after waiting 0 ms 2023-07-21 11:17:33,779 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1. 2023-07-21 11:17:33,779 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1. 2023-07-21 11:17:33,779 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 7c77e20d27514259a2f6abd58ecb8eb1: 2023-07-21 11:17:33,803 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:33,804 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 4e2c99ee00b48b6a06cf7330d2a34d3f, disabling compactions & flushes 2023-07-21 11:17:33,804 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f. 
2023-07-21 11:17:33,804 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f. 2023-07-21 11:17:33,804 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f. after waiting 0 ms 2023-07-21 11:17:33,804 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f. 2023-07-21 11:17:33,804 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f. 2023-07-21 11:17:33,804 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 4e2c99ee00b48b6a06cf7330d2a34d3f: 2023-07-21 11:17:33,804 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:33,804 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 39e7abb37b753473837a82599c2bd27c, disabling compactions & flushes 2023-07-21 11:17:33,804 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c. 2023-07-21 11:17:33,805 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c. 2023-07-21 11:17:33,805 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c. after waiting 0 ms 2023-07-21 11:17:33,805 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c. 2023-07-21 11:17:33,805 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c. 
2023-07-21 11:17:33,805 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 39e7abb37b753473837a82599c2bd27c: 2023-07-21 11:17:33,807 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:17:33,808 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938253808"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938253808"}]},"ts":"1689938253808"} 2023-07-21 11:17:33,808 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938253808"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938253808"}]},"ts":"1689938253808"} 2023-07-21 11:17:33,808 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938253808"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938253808"}]},"ts":"1689938253808"} 2023-07-21 11:17:33,808 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938253808"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938253808"}]},"ts":"1689938253808"} 2023-07-21 11:17:33,808 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938253808"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938253808"}]},"ts":"1689938253808"} 2023-07-21 11:17:33,811 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
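The entries above show CreateTableProcedure pid=129 pre-creating five regions for Group_testDisabledTableMove (single family 'f', boundaries at 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B' and 'zzzzz') and adding them to hbase:meta. A minimal client-side sketch of issuing such a pre-split create through the public Admin API is shown below; it is illustrative only (class name and configuration are assumptions, not the test's own code).

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreatePreSplitTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableName tn = TableName.valueOf("Group_testDisabledTableMove");
          // Four split keys produce the five regions seen in the log:
          // ('', 'aaaaa'), ('aaaaa', 'i\xBF\x14i\xBE'), ('i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B'),
          // ('r\x1C\xC7r\x1B', 'zzzzz'), ('zzzzz', '').
          byte[][] splitKeys = new byte[][] {
              Bytes.toBytes("aaaaa"),
              new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },
              new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },
              Bytes.toBytes("zzzzz") };
          admin.createTable(
              TableDescriptorBuilder.newBuilder(tn)
                  .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                  .build(),
              splitKeys);
        }
      }
    }

The synchronous createTable call waits for the master-side procedure to finish, which is why the RPC handler keeps logging "Checking to see if procedure is done pid=129" until the create completes.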
2023-07-21 11:17:33,812 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:17:33,812 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938253812"}]},"ts":"1689938253812"} 2023-07-21 11:17:33,813 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-21 11:17:33,816 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:33,816 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:33,816 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:33,816 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:33,816 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=79a7a6396c42f41507da8db17214c982, ASSIGN}, {pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=20662e4645c6626e1e36b48bf00b79b3, ASSIGN}, {pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c77e20d27514259a2f6abd58ecb8eb1, ASSIGN}, {pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4e2c99ee00b48b6a06cf7330d2a34d3f, ASSIGN}, {pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=39e7abb37b753473837a82599c2bd27c, ASSIGN}] 2023-07-21 11:17:33,819 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=39e7abb37b753473837a82599c2bd27c, ASSIGN 2023-07-21 11:17:33,819 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c77e20d27514259a2f6abd58ecb8eb1, ASSIGN 2023-07-21 11:17:33,819 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=20662e4645c6626e1e36b48bf00b79b3, ASSIGN 2023-07-21 11:17:33,819 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4e2c99ee00b48b6a06cf7330d2a34d3f, ASSIGN 2023-07-21 11:17:33,820 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=39e7abb37b753473837a82599c2bd27c, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46255,1689938224878; forceNewPlan=false, retain=false 2023-07-21 11:17:33,820 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=79a7a6396c42f41507da8db17214c982, ASSIGN 2023-07-21 11:17:33,820 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=20662e4645c6626e1e36b48bf00b79b3, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36863,1689938225106; forceNewPlan=false, retain=false 2023-07-21 11:17:33,820 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4e2c99ee00b48b6a06cf7330d2a34d3f, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36863,1689938225106; forceNewPlan=false, retain=false 2023-07-21 11:17:33,820 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c77e20d27514259a2f6abd58ecb8eb1, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46255,1689938224878; forceNewPlan=false, retain=false 2023-07-21 11:17:33,821 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=79a7a6396c42f41507da8db17214c982, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46255,1689938224878; forceNewPlan=false, retain=false 2023-07-21 11:17:33,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-21 11:17:33,970 INFO [jenkins-hbase17:40703] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
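Per the balancer entry above, all five new regions were handed assignments across the two region servers (ports 46255 and 36863) that appear in the OPENING updates that follow. If one wanted to inspect that placement from the client side, a hypothetical helper such as the one below could be used; it assumes an already-open Connection and the TableName from the earlier sketch, and is not part of the test itself.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class PrintRegionPlacementSketch {
      // Prints each region's encoded name and the server it is currently located on.
      static void printPlacement(Connection conn, TableName tn) throws IOException {
        try (RegionLocator locator = conn.getRegionLocator(tn)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }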
2023-07-21 11:17:33,973 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=4e2c99ee00b48b6a06cf7330d2a34d3f, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:33,973 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=39e7abb37b753473837a82599c2bd27c, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:33,973 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=7c77e20d27514259a2f6abd58ecb8eb1, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:33,974 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938253973"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938253973"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938253973"}]},"ts":"1689938253973"} 2023-07-21 11:17:33,974 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938253973"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938253973"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938253973"}]},"ts":"1689938253973"} 2023-07-21 11:17:33,973 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=20662e4645c6626e1e36b48bf00b79b3, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:33,973 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=79a7a6396c42f41507da8db17214c982, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:33,974 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938253973"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938253973"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938253973"}]},"ts":"1689938253973"} 2023-07-21 11:17:33,974 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938253973"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938253973"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938253973"}]},"ts":"1689938253973"} 2023-07-21 11:17:33,974 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938253973"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938253973"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938253973"}]},"ts":"1689938253973"} 2023-07-21 11:17:33,975 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=135, ppid=134, state=RUNNABLE; OpenRegionProcedure 39e7abb37b753473837a82599c2bd27c, 
server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:33,976 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=132, state=RUNNABLE; OpenRegionProcedure 7c77e20d27514259a2f6abd58ecb8eb1, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:33,979 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=137, ppid=131, state=RUNNABLE; OpenRegionProcedure 20662e4645c6626e1e36b48bf00b79b3, server=jenkins-hbase17.apache.org,36863,1689938225106}] 2023-07-21 11:17:33,984 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=130, state=RUNNABLE; OpenRegionProcedure 79a7a6396c42f41507da8db17214c982, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:33,985 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=133, state=RUNNABLE; OpenRegionProcedure 4e2c99ee00b48b6a06cf7330d2a34d3f, server=jenkins-hbase17.apache.org,36863,1689938225106}] 2023-07-21 11:17:34,137 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982. 2023-07-21 11:17:34,137 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 79a7a6396c42f41507da8db17214c982, NAME => 'Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 11:17:34,137 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 79a7a6396c42f41507da8db17214c982 2023-07-21 11:17:34,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:34,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 79a7a6396c42f41507da8db17214c982 2023-07-21 11:17:34,139 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 79a7a6396c42f41507da8db17214c982 2023-07-21 11:17:34,139 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f. 
2023-07-21 11:17:34,140 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4e2c99ee00b48b6a06cf7330d2a34d3f, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 11:17:34,140 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 4e2c99ee00b48b6a06cf7330d2a34d3f 2023-07-21 11:17:34,140 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:34,140 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 4e2c99ee00b48b6a06cf7330d2a34d3f 2023-07-21 11:17:34,140 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 4e2c99ee00b48b6a06cf7330d2a34d3f 2023-07-21 11:17:34,157 INFO [StoreOpener-4e2c99ee00b48b6a06cf7330d2a34d3f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4e2c99ee00b48b6a06cf7330d2a34d3f 2023-07-21 11:17:34,159 DEBUG [StoreOpener-4e2c99ee00b48b6a06cf7330d2a34d3f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/4e2c99ee00b48b6a06cf7330d2a34d3f/f 2023-07-21 11:17:34,159 DEBUG [StoreOpener-4e2c99ee00b48b6a06cf7330d2a34d3f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/4e2c99ee00b48b6a06cf7330d2a34d3f/f 2023-07-21 11:17:34,160 INFO [StoreOpener-4e2c99ee00b48b6a06cf7330d2a34d3f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4e2c99ee00b48b6a06cf7330d2a34d3f columnFamilyName f 2023-07-21 11:17:34,161 INFO [StoreOpener-4e2c99ee00b48b6a06cf7330d2a34d3f-1] regionserver.HStore(310): Store=4e2c99ee00b48b6a06cf7330d2a34d3f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:34,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/4e2c99ee00b48b6a06cf7330d2a34d3f 2023-07-21 11:17:34,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/4e2c99ee00b48b6a06cf7330d2a34d3f 2023-07-21 11:17:34,164 INFO [StoreOpener-79a7a6396c42f41507da8db17214c982-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 79a7a6396c42f41507da8db17214c982 2023-07-21 11:17:34,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 4e2c99ee00b48b6a06cf7330d2a34d3f 2023-07-21 11:17:34,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/4e2c99ee00b48b6a06cf7330d2a34d3f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:34,173 DEBUG [StoreOpener-79a7a6396c42f41507da8db17214c982-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/79a7a6396c42f41507da8db17214c982/f 2023-07-21 11:17:34,173 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 4e2c99ee00b48b6a06cf7330d2a34d3f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11074086720, jitterRate=0.031354695558547974}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:34,173 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 4e2c99ee00b48b6a06cf7330d2a34d3f: 2023-07-21 11:17:34,174 DEBUG [StoreOpener-79a7a6396c42f41507da8db17214c982-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/79a7a6396c42f41507da8db17214c982/f 2023-07-21 11:17:34,174 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f., pid=139, masterSystemTime=1689938254135 2023-07-21 11:17:34,175 INFO [StoreOpener-79a7a6396c42f41507da8db17214c982-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
79a7a6396c42f41507da8db17214c982 columnFamilyName f 2023-07-21 11:17:34,176 INFO [StoreOpener-79a7a6396c42f41507da8db17214c982-1] regionserver.HStore(310): Store=79a7a6396c42f41507da8db17214c982/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:34,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f. 2023-07-21 11:17:34,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/79a7a6396c42f41507da8db17214c982 2023-07-21 11:17:34,178 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f. 2023-07-21 11:17:34,178 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3. 2023-07-21 11:17:34,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 20662e4645c6626e1e36b48bf00b79b3, NAME => 'Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 11:17:34,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 20662e4645c6626e1e36b48bf00b79b3 2023-07-21 11:17:34,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:34,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 20662e4645c6626e1e36b48bf00b79b3 2023-07-21 11:17:34,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 20662e4645c6626e1e36b48bf00b79b3 2023-07-21 11:17:34,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/79a7a6396c42f41507da8db17214c982 2023-07-21 11:17:34,180 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=4e2c99ee00b48b6a06cf7330d2a34d3f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:34,180 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938254179"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938254179"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938254179"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938254179"}]},"ts":"1689938254179"} 2023-07-21 11:17:34,184 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=133 2023-07-21 11:17:34,184 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=133, state=SUCCESS; OpenRegionProcedure 4e2c99ee00b48b6a06cf7330d2a34d3f, server=jenkins-hbase17.apache.org,36863,1689938225106 in 197 msec 2023-07-21 11:17:34,186 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4e2c99ee00b48b6a06cf7330d2a34d3f, ASSIGN in 368 msec 2023-07-21 11:17:34,188 INFO [StoreOpener-20662e4645c6626e1e36b48bf00b79b3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 20662e4645c6626e1e36b48bf00b79b3 2023-07-21 11:17:34,195 DEBUG [StoreOpener-20662e4645c6626e1e36b48bf00b79b3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/20662e4645c6626e1e36b48bf00b79b3/f 2023-07-21 11:17:34,195 DEBUG [StoreOpener-20662e4645c6626e1e36b48bf00b79b3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/20662e4645c6626e1e36b48bf00b79b3/f 2023-07-21 11:17:34,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 79a7a6396c42f41507da8db17214c982 2023-07-21 11:17:34,196 INFO [StoreOpener-20662e4645c6626e1e36b48bf00b79b3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 20662e4645c6626e1e36b48bf00b79b3 columnFamilyName f 2023-07-21 11:17:34,201 INFO [StoreOpener-20662e4645c6626e1e36b48bf00b79b3-1] regionserver.HStore(310): Store=20662e4645c6626e1e36b48bf00b79b3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:34,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/79a7a6396c42f41507da8db17214c982/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:34,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/20662e4645c6626e1e36b48bf00b79b3 2023-07-21 11:17:34,209 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 79a7a6396c42f41507da8db17214c982; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10862360000, jitterRate=0.011636108160018921}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:34,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 79a7a6396c42f41507da8db17214c982: 2023-07-21 11:17:34,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/20662e4645c6626e1e36b48bf00b79b3 2023-07-21 11:17:34,210 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982., pid=138, masterSystemTime=1689938254133 2023-07-21 11:17:34,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982. 2023-07-21 11:17:34,213 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982. 2023-07-21 11:17:34,213 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=79a7a6396c42f41507da8db17214c982, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:34,213 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c. 
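A small consistency check on the split-policy numbers logged as these regions open: assuming the default hbase.hregion.max.filesize of 10737418240 bytes (10 GiB), the logged desiredMaxFileSize appears to be that base size plus base x jitterRate, truncated to a long. For the region opened above, 10737418240 + floor(10737418240 x 0.011636108160018921) = 10737418240 + 124941760 = 10862360000, matching desiredMaxFileSize=10862360000; the earlier desiredMaxFileSize=11074086720 likewise matches jitterRate=0.031354695558547974. This relation is inferred from the logged values under the default-configuration assumption, not quoted from the split policy's implementation.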
2023-07-21 11:17:34,213 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938254213"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938254213"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938254213"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938254213"}]},"ts":"1689938254213"} 2023-07-21 11:17:34,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 39e7abb37b753473837a82599c2bd27c, NAME => 'Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 11:17:34,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 39e7abb37b753473837a82599c2bd27c 2023-07-21 11:17:34,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:34,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 39e7abb37b753473837a82599c2bd27c 2023-07-21 11:17:34,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 39e7abb37b753473837a82599c2bd27c 2023-07-21 11:17:34,217 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 20662e4645c6626e1e36b48bf00b79b3 2023-07-21 11:17:34,220 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=130 2023-07-21 11:17:34,220 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=130, state=SUCCESS; OpenRegionProcedure 79a7a6396c42f41507da8db17214c982, server=jenkins-hbase17.apache.org,46255,1689938224878 in 232 msec 2023-07-21 11:17:34,223 INFO [StoreOpener-39e7abb37b753473837a82599c2bd27c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 39e7abb37b753473837a82599c2bd27c 2023-07-21 11:17:34,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/20662e4645c6626e1e36b48bf00b79b3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:34,225 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 20662e4645c6626e1e36b48bf00b79b3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9888814240, jitterRate=-0.07903240621089935}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:34,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 20662e4645c6626e1e36b48bf00b79b3: 2023-07-21 11:17:34,225 INFO 
[PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=79a7a6396c42f41507da8db17214c982, ASSIGN in 404 msec 2023-07-21 11:17:34,226 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3., pid=137, masterSystemTime=1689938254135 2023-07-21 11:17:34,226 DEBUG [StoreOpener-39e7abb37b753473837a82599c2bd27c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/39e7abb37b753473837a82599c2bd27c/f 2023-07-21 11:17:34,226 DEBUG [StoreOpener-39e7abb37b753473837a82599c2bd27c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/39e7abb37b753473837a82599c2bd27c/f 2023-07-21 11:17:34,227 INFO [StoreOpener-39e7abb37b753473837a82599c2bd27c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 39e7abb37b753473837a82599c2bd27c columnFamilyName f 2023-07-21 11:17:34,228 INFO [StoreOpener-39e7abb37b753473837a82599c2bd27c-1] regionserver.HStore(310): Store=39e7abb37b753473837a82599c2bd27c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:34,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3. 2023-07-21 11:17:34,228 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3. 
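With the individual ASSIGN subprocedures finishing above, the remaining entries open the last regions and then flip the table to ENABLED. A trivial, hypothetical client-side check that the table is fully online (distinct from the HBaseTestingUtility wait the test performs later in this log) might look like the sketch below; the class and method names are assumptions.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class TableOnlineCheckSketch {
      // True once the table is enabled and every region has been assigned a location.
      static boolean isFullyOnline(Admin admin, TableName tn) throws IOException {
        return admin.isTableEnabled(tn) && admin.isTableAvailable(tn);
      }
    }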
2023-07-21 11:17:34,229 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/39e7abb37b753473837a82599c2bd27c 2023-07-21 11:17:34,229 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=20662e4645c6626e1e36b48bf00b79b3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:34,229 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/39e7abb37b753473837a82599c2bd27c 2023-07-21 11:17:34,229 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938254229"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938254229"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938254229"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938254229"}]},"ts":"1689938254229"} 2023-07-21 11:17:34,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 39e7abb37b753473837a82599c2bd27c 2023-07-21 11:17:34,232 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=131 2023-07-21 11:17:34,232 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=131, state=SUCCESS; OpenRegionProcedure 20662e4645c6626e1e36b48bf00b79b3, server=jenkins-hbase17.apache.org,36863,1689938225106 in 252 msec 2023-07-21 11:17:34,234 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=20662e4645c6626e1e36b48bf00b79b3, ASSIGN in 417 msec 2023-07-21 11:17:34,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/39e7abb37b753473837a82599c2bd27c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:34,236 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 39e7abb37b753473837a82599c2bd27c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11497101600, jitterRate=0.07075102627277374}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:34,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 39e7abb37b753473837a82599c2bd27c: 2023-07-21 11:17:34,237 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c., pid=135, masterSystemTime=1689938254133 2023-07-21 11:17:34,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c. 2023-07-21 11:17:34,239 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c. 2023-07-21 11:17:34,239 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1. 2023-07-21 11:17:34,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7c77e20d27514259a2f6abd58ecb8eb1, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 11:17:34,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 7c77e20d27514259a2f6abd58ecb8eb1 2023-07-21 11:17:34,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:34,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 7c77e20d27514259a2f6abd58ecb8eb1 2023-07-21 11:17:34,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 7c77e20d27514259a2f6abd58ecb8eb1 2023-07-21 11:17:34,240 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=39e7abb37b753473837a82599c2bd27c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:34,240 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938254240"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938254240"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938254240"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938254240"}]},"ts":"1689938254240"} 2023-07-21 11:17:34,242 INFO [StoreOpener-7c77e20d27514259a2f6abd58ecb8eb1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7c77e20d27514259a2f6abd58ecb8eb1 2023-07-21 11:17:34,244 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=134 2023-07-21 11:17:34,245 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=134, state=SUCCESS; OpenRegionProcedure 39e7abb37b753473837a82599c2bd27c, server=jenkins-hbase17.apache.org,46255,1689938224878 in 267 msec 2023-07-21 11:17:34,246 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=39e7abb37b753473837a82599c2bd27c, ASSIGN in 428 msec 2023-07-21 11:17:34,247 DEBUG [StoreOpener-7c77e20d27514259a2f6abd58ecb8eb1-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/7c77e20d27514259a2f6abd58ecb8eb1/f 2023-07-21 11:17:34,247 DEBUG [StoreOpener-7c77e20d27514259a2f6abd58ecb8eb1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/7c77e20d27514259a2f6abd58ecb8eb1/f 2023-07-21 11:17:34,247 INFO [StoreOpener-7c77e20d27514259a2f6abd58ecb8eb1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7c77e20d27514259a2f6abd58ecb8eb1 columnFamilyName f 2023-07-21 11:17:34,248 INFO [StoreOpener-7c77e20d27514259a2f6abd58ecb8eb1-1] regionserver.HStore(310): Store=7c77e20d27514259a2f6abd58ecb8eb1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:34,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/7c77e20d27514259a2f6abd58ecb8eb1 2023-07-21 11:17:34,250 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/7c77e20d27514259a2f6abd58ecb8eb1 2023-07-21 11:17:34,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-21 11:17:34,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 7c77e20d27514259a2f6abd58ecb8eb1 2023-07-21 11:17:34,277 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/7c77e20d27514259a2f6abd58ecb8eb1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:34,278 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 7c77e20d27514259a2f6abd58ecb8eb1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11285699520, jitterRate=0.05106267333030701}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:34,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 7c77e20d27514259a2f6abd58ecb8eb1: 2023-07-21 11:17:34,279 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1., pid=136, masterSystemTime=1689938254133 2023-07-21 11:17:34,283 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1. 2023-07-21 11:17:34,283 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1. 2023-07-21 11:17:34,285 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=7c77e20d27514259a2f6abd58ecb8eb1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:34,285 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938254285"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938254285"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938254285"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938254285"}]},"ts":"1689938254285"} 2023-07-21 11:17:34,292 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=132 2023-07-21 11:17:34,293 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; OpenRegionProcedure 7c77e20d27514259a2f6abd58ecb8eb1, server=jenkins-hbase17.apache.org,46255,1689938224878 in 311 msec 2023-07-21 11:17:34,294 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=132, resume processing ppid=129 2023-07-21 11:17:34,294 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c77e20d27514259a2f6abd58ecb8eb1, ASSIGN in 477 msec 2023-07-21 11:17:34,295 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:17:34,295 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938254295"}]},"ts":"1689938254295"} 2023-07-21 11:17:34,297 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-21 11:17:34,300 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:17:34,303 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 658 msec 2023-07-21 11:17:34,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-21 11:17:34,759 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: 
default:Group_testDisabledTableMove, procId: 129 completed 2023-07-21 11:17:34,759 DEBUG [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-21 11:17:34,760 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:34,764 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-21 11:17:34,765 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:34,765 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-21 11:17:34,765 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:34,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-21 11:17:34,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 11:17:34,783 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-21 11:17:34,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testDisabledTableMove 2023-07-21 11:17:34,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=140, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-21 11:17:34,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-21 11:17:34,789 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938254789"}]},"ts":"1689938254789"} 2023-07-21 11:17:34,791 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-21 11:17:34,792 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-21 11:17:34,794 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=79a7a6396c42f41507da8db17214c982, UNASSIGN}, {pid=142, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=20662e4645c6626e1e36b48bf00b79b3, UNASSIGN}, {pid=143, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c77e20d27514259a2f6abd58ecb8eb1, UNASSIGN}, {pid=144, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testDisabledTableMove, region=4e2c99ee00b48b6a06cf7330d2a34d3f, UNASSIGN}, {pid=145, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=39e7abb37b753473837a82599c2bd27c, UNASSIGN}] 2023-07-21 11:17:34,795 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=39e7abb37b753473837a82599c2bd27c, UNASSIGN 2023-07-21 11:17:34,796 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=39e7abb37b753473837a82599c2bd27c, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:34,796 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938254796"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938254796"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938254796"}]},"ts":"1689938254796"} 2023-07-21 11:17:34,798 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=146, ppid=145, state=RUNNABLE; CloseRegionProcedure 39e7abb37b753473837a82599c2bd27c, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:34,801 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4e2c99ee00b48b6a06cf7330d2a34d3f, UNASSIGN 2023-07-21 11:17:34,802 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=4e2c99ee00b48b6a06cf7330d2a34d3f, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:34,802 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938254802"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938254802"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938254802"}]},"ts":"1689938254802"} 2023-07-21 11:17:34,803 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=144, state=RUNNABLE; CloseRegionProcedure 4e2c99ee00b48b6a06cf7330d2a34d3f, server=jenkins-hbase17.apache.org,36863,1689938225106}] 2023-07-21 11:17:34,806 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=143, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c77e20d27514259a2f6abd58ecb8eb1, UNASSIGN 2023-07-21 11:17:34,807 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=7c77e20d27514259a2f6abd58ecb8eb1, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:34,808 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938254807"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938254807"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938254807"}]},"ts":"1689938254807"} 2023-07-21 11:17:34,813 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=148, ppid=143, state=RUNNABLE; CloseRegionProcedure 7c77e20d27514259a2f6abd58ecb8eb1, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:34,816 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=141, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=79a7a6396c42f41507da8db17214c982, UNASSIGN 2023-07-21 11:17:34,816 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=142, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=20662e4645c6626e1e36b48bf00b79b3, UNASSIGN 2023-07-21 11:17:34,820 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=79a7a6396c42f41507da8db17214c982, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:34,821 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938254820"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938254820"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938254820"}]},"ts":"1689938254820"} 2023-07-21 11:17:34,821 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=20662e4645c6626e1e36b48bf00b79b3, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:34,821 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938254821"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938254821"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938254821"}]},"ts":"1689938254821"} 2023-07-21 11:17:34,828 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=142, state=RUNNABLE; CloseRegionProcedure 20662e4645c6626e1e36b48bf00b79b3, server=jenkins-hbase17.apache.org,36863,1689938225106}] 2023-07-21 11:17:34,829 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=141, state=RUNNABLE; CloseRegionProcedure 79a7a6396c42f41507da8db17214c982, server=jenkins-hbase17.apache.org,46255,1689938224878}] 2023-07-21 11:17:34,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-21 11:17:34,951 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 39e7abb37b753473837a82599c2bd27c 2023-07-21 11:17:34,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 39e7abb37b753473837a82599c2bd27c, disabling compactions & flushes 2023-07-21 11:17:34,952 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c. 2023-07-21 11:17:34,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c. 2023-07-21 11:17:34,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c. after waiting 0 ms 2023-07-21 11:17:34,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c. 2023-07-21 11:17:34,956 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 4e2c99ee00b48b6a06cf7330d2a34d3f 2023-07-21 11:17:34,965 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 4e2c99ee00b48b6a06cf7330d2a34d3f, disabling compactions & flushes 2023-07-21 11:17:34,965 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f. 2023-07-21 11:17:34,965 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f. 2023-07-21 11:17:34,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f. after waiting 0 ms 2023-07-21 11:17:34,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f. 2023-07-21 11:17:34,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/39e7abb37b753473837a82599c2bd27c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:34,977 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c. 2023-07-21 11:17:34,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 39e7abb37b753473837a82599c2bd27c: 2023-07-21 11:17:34,978 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/4e2c99ee00b48b6a06cf7330d2a34d3f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:34,979 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f. 
2023-07-21 11:17:34,979 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 4e2c99ee00b48b6a06cf7330d2a34d3f: 2023-07-21 11:17:34,979 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 39e7abb37b753473837a82599c2bd27c 2023-07-21 11:17:34,979 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 7c77e20d27514259a2f6abd58ecb8eb1 2023-07-21 11:17:34,981 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=39e7abb37b753473837a82599c2bd27c, regionState=CLOSED 2023-07-21 11:17:34,981 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 7c77e20d27514259a2f6abd58ecb8eb1, disabling compactions & flushes 2023-07-21 11:17:34,981 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1. 2023-07-21 11:17:34,981 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1. 2023-07-21 11:17:34,981 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1. after waiting 0 ms 2023-07-21 11:17:34,981 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1. 2023-07-21 11:17:34,981 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938254981"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938254981"}]},"ts":"1689938254981"} 2023-07-21 11:17:34,984 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 4e2c99ee00b48b6a06cf7330d2a34d3f 2023-07-21 11:17:34,984 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 20662e4645c6626e1e36b48bf00b79b3 2023-07-21 11:17:34,984 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=4e2c99ee00b48b6a06cf7330d2a34d3f, regionState=CLOSED 2023-07-21 11:17:34,985 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938254984"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938254984"}]},"ts":"1689938254984"} 2023-07-21 11:17:34,985 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 20662e4645c6626e1e36b48bf00b79b3, disabling compactions & flushes 2023-07-21 11:17:34,986 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3. 
2023-07-21 11:17:34,986 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3. 2023-07-21 11:17:34,986 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3. after waiting 0 ms 2023-07-21 11:17:34,986 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3. 2023-07-21 11:17:34,988 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=146, resume processing ppid=145 2023-07-21 11:17:34,988 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=145, state=SUCCESS; CloseRegionProcedure 39e7abb37b753473837a82599c2bd27c, server=jenkins-hbase17.apache.org,46255,1689938224878 in 186 msec 2023-07-21 11:17:34,992 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=39e7abb37b753473837a82599c2bd27c, UNASSIGN in 194 msec 2023-07-21 11:17:34,997 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=144 2023-07-21 11:17:34,997 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=144, state=SUCCESS; CloseRegionProcedure 4e2c99ee00b48b6a06cf7330d2a34d3f, server=jenkins-hbase17.apache.org,36863,1689938225106 in 185 msec 2023-07-21 11:17:34,998 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4e2c99ee00b48b6a06cf7330d2a34d3f, UNASSIGN in 203 msec 2023-07-21 11:17:35,009 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/7c77e20d27514259a2f6abd58ecb8eb1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:35,011 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1. 2023-07-21 11:17:35,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 7c77e20d27514259a2f6abd58ecb8eb1: 2023-07-21 11:17:35,012 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/20662e4645c6626e1e36b48bf00b79b3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:35,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3. 
2023-07-21 11:17:35,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 20662e4645c6626e1e36b48bf00b79b3: 2023-07-21 11:17:35,017 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 7c77e20d27514259a2f6abd58ecb8eb1 2023-07-21 11:17:35,017 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 79a7a6396c42f41507da8db17214c982 2023-07-21 11:17:35,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 79a7a6396c42f41507da8db17214c982, disabling compactions & flushes 2023-07-21 11:17:35,018 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982. 2023-07-21 11:17:35,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982. 2023-07-21 11:17:35,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982. after waiting 0 ms 2023-07-21 11:17:35,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982. 2023-07-21 11:17:35,019 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=7c77e20d27514259a2f6abd58ecb8eb1, regionState=CLOSED 2023-07-21 11:17:35,019 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938255018"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938255018"}]},"ts":"1689938255018"} 2023-07-21 11:17:35,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 20662e4645c6626e1e36b48bf00b79b3 2023-07-21 11:17:35,022 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=20662e4645c6626e1e36b48bf00b79b3, regionState=CLOSED 2023-07-21 11:17:35,022 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938255022"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938255022"}]},"ts":"1689938255022"} 2023-07-21 11:17:35,025 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=143 2023-07-21 11:17:35,026 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; CloseRegionProcedure 7c77e20d27514259a2f6abd58ecb8eb1, server=jenkins-hbase17.apache.org,46255,1689938224878 in 207 msec 2023-07-21 11:17:35,027 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=142 2023-07-21 11:17:35,027 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=142, state=SUCCESS; CloseRegionProcedure 20662e4645c6626e1e36b48bf00b79b3, server=jenkins-hbase17.apache.org,36863,1689938225106 in 196 msec 2023-07-21 11:17:35,028 
INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c77e20d27514259a2f6abd58ecb8eb1, UNASSIGN in 232 msec 2023-07-21 11:17:35,030 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=20662e4645c6626e1e36b48bf00b79b3, UNASSIGN in 233 msec 2023-07-21 11:17:35,033 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/Group_testDisabledTableMove/79a7a6396c42f41507da8db17214c982/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:35,034 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982. 2023-07-21 11:17:35,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 79a7a6396c42f41507da8db17214c982: 2023-07-21 11:17:35,037 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 79a7a6396c42f41507da8db17214c982 2023-07-21 11:17:35,038 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=79a7a6396c42f41507da8db17214c982, regionState=CLOSED 2023-07-21 11:17:35,038 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938255038"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938255038"}]},"ts":"1689938255038"} 2023-07-21 11:17:35,043 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=141 2023-07-21 11:17:35,043 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=141, state=SUCCESS; CloseRegionProcedure 79a7a6396c42f41507da8db17214c982, server=jenkins-hbase17.apache.org,46255,1689938224878 in 211 msec 2023-07-21 11:17:35,048 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=140 2023-07-21 11:17:35,048 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=79a7a6396c42f41507da8db17214c982, UNASSIGN in 249 msec 2023-07-21 11:17:35,049 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938255049"}]},"ts":"1689938255049"} 2023-07-21 11:17:35,050 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-21 11:17:35,051 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-21 11:17:35,056 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=140, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 270 msec 2023-07-21 11:17:35,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-21 11:17:35,101 INFO [Listener at localhost.localdomain/38409] 
client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 140 completed 2023-07-21 11:17:35,101 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1426909851 2023-07-21 11:17:35,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1426909851 2023-07-21 11:17:35,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1426909851 2023-07-21 11:17:35,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:35,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:35,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:35,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-21 11:17:35,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1426909851, current retry=0 2023-07-21 11:17:35,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1426909851. 
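The MoveTables call above is the heart of this test: because Group_testDisabledTableMove is already DISABLED, RSGroupAdminServer only rewrites the group mapping under /hbase/rsgroup and moves 0 regions. A minimal client-side sketch of that step, assuming the branch-2.4 hbase-rsgroup client API (RSGroupAdminClient and its moveTables(Set<TableName>, String) method); the class name and connection wiring below are illustrative, not the exact code in TestRSGroupsAdmin1.

import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveDisabledTableSketch {
  // Move a table that is already disabled into a different rsgroup.
  static void moveDisabledTable(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    TableName table = TableName.valueOf("Group_testDisabledTableMove");
    // The table has no online regions, so the master only updates the
    // /hbase/rsgroup znodes and reports "Moving 0 region(s)", as logged above.
    rsGroupAdmin.moveTables(Collections.singleton(table),
        "Group_testDisabledTableMove_1426909851");
  }
}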
2023-07-21 11:17:35,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:35,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:35,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:35,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-21 11:17:35,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 11:17:35,117 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-21 11:17:35,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testDisabledTableMove 2023-07-21 11:17:35,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:35,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.CallRunner(144): callId: 928 service: MasterService methodName: DisableTable size: 89 connection: 136.243.18.41:42872 deadline: 1689938315117, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-21 11:17:35,119 DEBUG [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
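The TableNotEnabledException above is expected: the second disableTable() call fails the procedure preflight check because the table is already DISABLED, and HBaseTestingUtility falls back to deleting it directly. A hedged sketch of that guard using only the standard Admin API; the helper and class names are illustrative.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.TableNotEnabledException;
import org.apache.hadoop.hbase.client.Admin;

public class DisableThenDeleteSketch {
  // Disable the table only if it is still enabled, then delete it.
  static void disableIfNeededAndDelete(Admin admin, TableName table) throws Exception {
    try {
      if (!admin.isTableDisabled(table)) {
        admin.disableTable(table);
      }
    } catch (TableNotEnabledException e) {
      // Already disabled (as in the log above); safe to proceed straight to delete.
    }
    admin.deleteTable(table);
  }
}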
2023-07-21 11:17:35,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_testDisabledTableMove 2023-07-21 11:17:35,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] procedure2.ProcedureExecutor(1029): Stored pid=152, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 11:17:35,123 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=152, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 11:17:35,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1426909851' 2023-07-21 11:17:35,124 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=152, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 11:17:35,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1426909851 2023-07-21 11:17:35,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:35,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:35,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:35,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=152 2023-07-21 11:17:35,132 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/79a7a6396c42f41507da8db17214c982 2023-07-21 11:17:35,132 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/20662e4645c6626e1e36b48bf00b79b3 2023-07-21 11:17:35,132 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/39e7abb37b753473837a82599c2bd27c 2023-07-21 11:17:35,132 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/4e2c99ee00b48b6a06cf7330d2a34d3f 2023-07-21 11:17:35,132 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/7c77e20d27514259a2f6abd58ecb8eb1 2023-07-21 11:17:35,139 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/20662e4645c6626e1e36b48bf00b79b3/f, FileablePath, 
hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/20662e4645c6626e1e36b48bf00b79b3/recovered.edits] 2023-07-21 11:17:35,139 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/7c77e20d27514259a2f6abd58ecb8eb1/f, FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/7c77e20d27514259a2f6abd58ecb8eb1/recovered.edits] 2023-07-21 11:17:35,139 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/39e7abb37b753473837a82599c2bd27c/f, FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/39e7abb37b753473837a82599c2bd27c/recovered.edits] 2023-07-21 11:17:35,139 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/79a7a6396c42f41507da8db17214c982/f, FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/79a7a6396c42f41507da8db17214c982/recovered.edits] 2023-07-21 11:17:35,139 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/4e2c99ee00b48b6a06cf7330d2a34d3f/f, FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/4e2c99ee00b48b6a06cf7330d2a34d3f/recovered.edits] 2023-07-21 11:17:35,146 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/20662e4645c6626e1e36b48bf00b79b3/recovered.edits/4.seqid to hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/Group_testDisabledTableMove/20662e4645c6626e1e36b48bf00b79b3/recovered.edits/4.seqid 2023-07-21 11:17:35,147 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/39e7abb37b753473837a82599c2bd27c/recovered.edits/4.seqid to hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/Group_testDisabledTableMove/39e7abb37b753473837a82599c2bd27c/recovered.edits/4.seqid 2023-07-21 11:17:35,147 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/7c77e20d27514259a2f6abd58ecb8eb1/recovered.edits/4.seqid to 
hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/Group_testDisabledTableMove/7c77e20d27514259a2f6abd58ecb8eb1/recovered.edits/4.seqid 2023-07-21 11:17:35,147 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/4e2c99ee00b48b6a06cf7330d2a34d3f/recovered.edits/4.seqid to hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/Group_testDisabledTableMove/4e2c99ee00b48b6a06cf7330d2a34d3f/recovered.edits/4.seqid 2023-07-21 11:17:35,148 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/20662e4645c6626e1e36b48bf00b79b3 2023-07-21 11:17:35,148 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/79a7a6396c42f41507da8db17214c982/recovered.edits/4.seqid to hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/archive/data/default/Group_testDisabledTableMove/79a7a6396c42f41507da8db17214c982/recovered.edits/4.seqid 2023-07-21 11:17:35,149 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/39e7abb37b753473837a82599c2bd27c 2023-07-21 11:17:35,149 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/4e2c99ee00b48b6a06cf7330d2a34d3f 2023-07-21 11:17:35,149 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/7c77e20d27514259a2f6abd58ecb8eb1 2023-07-21 11:17:35,149 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/.tmp/data/default/Group_testDisabledTableMove/79a7a6396c42f41507da8db17214c982 2023-07-21 11:17:35,149 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-21 11:17:35,152 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=152, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 11:17:35,154 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-21 11:17:35,159 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-21 11:17:35,160 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=152, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 11:17:35,160 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
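Note that DeleteTableProcedure does not destroy region data in place: the HFileArchiver lines above move each region directory from .tmp/data/default/<table>/ to archive/data/default/<table>/ under the HBase root directory. A small sketch that walks that archive layout with the plain Hadoop FileSystem API; the root-dir argument is a placeholder, not the path used in this run.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ArchiveLayoutSketch {
  // List the per-region directories left under archive/data/default/<table>
  // after the table has been deleted.
  static void listArchivedRegions(Configuration conf, Path hbaseRootDir) throws Exception {
    Path tableArchive = new Path(hbaseRootDir,
        "archive/data/default/Group_testDisabledTableMove");
    FileSystem fs = tableArchive.getFileSystem(conf);
    for (FileStatus regionDir : fs.listStatus(tableArchive)) {
      System.out.println("archived region dir: " + regionDir.getPath());
    }
  }
}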
2023-07-21 11:17:35,160 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938255160"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:35,160 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938255160"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:35,161 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938255160"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:35,161 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938255160"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:35,161 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938255160"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:35,163 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-21 11:17:35,163 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 79a7a6396c42f41507da8db17214c982, NAME => 'Group_testDisabledTableMove,,1689938253641.79a7a6396c42f41507da8db17214c982.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 20662e4645c6626e1e36b48bf00b79b3, NAME => 'Group_testDisabledTableMove,aaaaa,1689938253641.20662e4645c6626e1e36b48bf00b79b3.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 7c77e20d27514259a2f6abd58ecb8eb1, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689938253641.7c77e20d27514259a2f6abd58ecb8eb1.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 4e2c99ee00b48b6a06cf7330d2a34d3f, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689938253641.4e2c99ee00b48b6a06cf7330d2a34d3f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 39e7abb37b753473837a82599c2bd27c, NAME => 'Group_testDisabledTableMove,zzzzz,1689938253641.39e7abb37b753473837a82599c2bd27c.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-21 11:17:35,163 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
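The five regions removed from hbase:meta above also document how the table was laid out: it was pre-split on the keys aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B and zzzzz. For reference, an equivalent pre-split createTable call with the standard Admin API looks roughly like this; it is an illustration reconstructed from those region boundaries, not the exact helper the test uses.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitTableSketch {
  // Create a five-region table with the same split keys as the regions
  // listed in the meta deletions above, with a single family "f".
  static void createPreSplitTable(Admin admin) throws Exception {
    byte[][] splitKeys = new byte[][] {
        Bytes.toBytes("aaaaa"),
        new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },  // i\xBF\x14i\xBE
        new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },         // r\x1C\xC7r\x1B
        Bytes.toBytes("zzzzz")
    };
    admin.createTable(
        TableDescriptorBuilder.newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
            .build(),
        splitKeys);
  }
}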
2023-07-21 11:17:35,163 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689938255163"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:35,165 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-21 11:17:35,166 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=152, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 11:17:35,167 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=152, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 46 msec 2023-07-21 11:17:35,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(1230): Checking to see if procedure is done pid=152 2023-07-21 11:17:35,233 INFO [Listener at localhost.localdomain/38409] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 152 completed 2023-07-21 11:17:35,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:35,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:35,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:35,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
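With DeleteTableProcedure (pid=152) finished, Group_testDisabledTableMove no longer exists from the client's point of view; a test would typically assert exactly that before moving on to group cleanup. A minimal check using only the standard Admin API; the assertion style is illustrative.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class PostDeleteCheckSketch {
  // Verify the table really is gone after the DELETE operation completes.
  static void assertDeleted(Admin admin) throws Exception {
    TableName table = TableName.valueOf("Group_testDisabledTableMove");
    if (admin.tableExists(table)) {
      throw new AssertionError("Group_testDisabledTableMove should have been deleted");
    }
  }
}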
2023-07-21 11:17:35,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:35,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009] to rsgroup default 2023-07-21 11:17:35,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1426909851 2023-07-21 11:17:35,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:35,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:35,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:35,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1426909851, current retry=0 2023-07-21 11:17:35,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,33011,1689938225358, jenkins-hbase17.apache.org,35009,1689938231406] are moved back to Group_testDisabledTableMove_1426909851 2023-07-21 11:17:35,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1426909851 => default 2023-07-21 11:17:35,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:35,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup Group_testDisabledTableMove_1426909851 2023-07-21 11:17:35,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:35,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:35,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 11:17:35,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:35,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:35,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
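The tail of this block is the per-test rsgroup cleanup: the two group servers (ports 33011 and 35009) are moved back to the default group and the now-empty Group_testDisabledTableMove_1426909851 group is removed. A hedged sketch of that teardown, again assuming the branch-2.4 RSGroupAdminClient API (moveServers(Set<Address>, String) and removeRSGroup(String)); the wiring is illustrative.

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class GroupTeardownSketch {
  // Return the group's region servers to "default", then drop the empty group.
  static void tearDownGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    Set<Address> servers = new HashSet<>();
    servers.add(Address.fromParts("jenkins-hbase17.apache.org", 33011));
    servers.add(Address.fromParts("jenkins-hbase17.apache.org", 35009));
    rsGroupAdmin.moveServers(servers, "default");
    rsGroupAdmin.removeRSGroup("Group_testDisabledTableMove_1426909851");
  }
}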
2023-07-21 11:17:35,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:35,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:35,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:35,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:35,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:35,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:35,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:35,255 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:35,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:35,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:35,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:35,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:35,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:35,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:35,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:35,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:40703] to rsgroup master 2023-07-21 11:17:35,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:35,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.CallRunner(144): callId: 962 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:42872 deadline: 1689939455264, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 2023-07-21 11:17:35,265 WARN [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:17:35,266 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:35,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:35,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:35,267 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009, jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:46255], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:35,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:35,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:35,285 INFO [Listener at localhost.localdomain/38409] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=502 (was 501) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2096469048_17 at /127.0.0.1:50170 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xc2f4991-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x595febc4-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=778 (was 746) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=649 (was 644) - SystemLoadAverage LEAK? -, ProcessCount=187 (was 184) - ProcessCount LEAK? -, AvailableMemoryMB=3338 (was 3441) 2023-07-21 11:17:35,285 WARN [Listener at localhost.localdomain/38409] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-21 11:17:35,305 INFO [Listener at localhost.localdomain/38409] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=502, OpenFileDescriptor=778, MaxFileDescriptor=60000, SystemLoadAverage=649, ProcessCount=185, AvailableMemoryMB=3338 2023-07-21 11:17:35,305 WARN [Listener at localhost.localdomain/38409] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-21 11:17:35,305 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-21 11:17:35,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:35,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:35,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:35,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
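Editor's note: the ConstraintException above comes from the TestRSGroupsBase cleanup path. After each test it tries to move the master's address (jenkins-hbase17.apache.org:40703) into the "master" rsgroup, and RSGroupAdminServer.moveServers only accepts addresses of online region servers, so the call fails and is logged as "Got this on setup, FYI" before the run continues; the ResourceChecker warning (Thread=502 is superior to 500) is the harness separately flagging a possible thread leak. The sketch below shows one way a caller could avoid the exception by filtering to live region servers first. The class and method names follow the stack trace, but the helper itself and its wiring are assumptions, not code from the test.

    // Hedged sketch, assuming RSGroupAdminClient(Connection) and
    // moveServers(Set<Address>, String) as they appear in the stack trace above.
    import java.util.Set;
    import java.util.stream.Collectors;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveOnlyOnlineServersSketch {
      static void moveOnlineServers(Connection conn, Set<Address> candidates, String group)
          throws Exception {
        try (Admin admin = conn.getAdmin()) {
          // Addresses the master currently reports as live region servers.
          Set<Address> online = admin.getClusterMetrics().getLiveServerMetrics().keySet()
              .stream()
              .map(sn -> Address.fromParts(sn.getHostname(), sn.getPort()))
              .collect(Collectors.toSet());
          candidates.retainAll(online); // drops the master address, e.g. ...:40703
          if (!candidates.isEmpty()) {
            new RSGroupAdminClient(conn).moveServers(candidates, group);
          }
        }
      }
    }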
2023-07-21 11:17:35,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:35,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:35,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:35,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:35,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:35,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:35,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:35,317 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:35,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:35,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:35,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:35,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:35,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:35,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:35,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:35,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:40703] to rsgroup master 2023-07-21 11:17:35,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:35,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] ipc.CallRunner(144): callId: 990 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:42872 deadline: 1689939455326, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 2023-07-21 11:17:35,327 WARN [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:40703 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:17:35,328 INFO [Listener at localhost.localdomain/38409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:35,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:35,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:35,329 INFO [Listener at localhost.localdomain/38409] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33011, jenkins-hbase17.apache.org:35009, jenkins-hbase17.apache.org:36863, jenkins-hbase17.apache.org:46255], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:35,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:35,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40703] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:35,330 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 11:17:35,330 INFO [Listener at localhost.localdomain/38409] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 11:17:35,331 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5d8a744a to 127.0.0.1:63555 2023-07-21 11:17:35,331 DEBUG [Listener at localhost.localdomain/38409] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:35,333 DEBUG [Listener at localhost.localdomain/38409] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 11:17:35,333 DEBUG [Listener at localhost.localdomain/38409] util.JVMClusterUtil(257): Found active master hash=1498934588, stopped=false 2023-07-21 11:17:35,333 DEBUG [Listener at localhost.localdomain/38409] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 11:17:35,333 
DEBUG [Listener at localhost.localdomain/38409] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 11:17:35,333 INFO [Listener at localhost.localdomain/38409] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,40703,1689938222766 2023-07-21 11:17:35,334 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:35009-0x101879855f5000b, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:35,334 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:35,334 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:35,334 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:35,335 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:35,335 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:35,335 INFO [Listener at localhost.localdomain/38409] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 11:17:35,335 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:35,335 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35009-0x101879855f5000b, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:35,335 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:35,335 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:35,335 DEBUG [Listener at localhost.localdomain/38409] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2e0f64c4 to 127.0.0.1:63555 2023-07-21 11:17:35,335 DEBUG [Listener at localhost.localdomain/38409] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:35,335 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:35,336 INFO [Listener at localhost.localdomain/38409] 
regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,46255,1689938224878' ***** 2023-07-21 11:17:35,336 INFO [Listener at localhost.localdomain/38409] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:17:35,336 INFO [Listener at localhost.localdomain/38409] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,36863,1689938225106' ***** 2023-07-21 11:17:35,337 INFO [Listener at localhost.localdomain/38409] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:17:35,336 INFO [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:17:35,337 INFO [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:17:35,337 INFO [Listener at localhost.localdomain/38409] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,33011,1689938225358' ***** 2023-07-21 11:17:35,337 INFO [Listener at localhost.localdomain/38409] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:17:35,337 INFO [RS:2;jenkins-hbase17:33011] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:17:35,338 INFO [Listener at localhost.localdomain/38409] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,35009,1689938231406' ***** 2023-07-21 11:17:35,338 INFO [Listener at localhost.localdomain/38409] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:17:35,339 INFO [RS:3;jenkins-hbase17:35009] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:17:35,341 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:35,343 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:35,343 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:17:35,349 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:17:35,349 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:17:35,351 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:17:35,352 INFO [RS:0;jenkins-hbase17:46255] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1f5e424d{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:35,352 INFO [RS:3;jenkins-hbase17:35009] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@37afa654{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:35,352 INFO [RS:1;jenkins-hbase17:36863] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@64a22c9a{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:35,352 INFO [RS:2;jenkins-hbase17:33011] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@f6af39f{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:35,352 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:35,356 INFO [RS:2;jenkins-hbase17:33011] server.AbstractConnector(383): Stopped ServerConnector@65eae3e8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:17:35,356 INFO [RS:0;jenkins-hbase17:46255] server.AbstractConnector(383): Stopped ServerConnector@70ea26d2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:17:35,356 INFO [RS:2;jenkins-hbase17:33011] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:17:35,356 INFO [RS:3;jenkins-hbase17:35009] server.AbstractConnector(383): Stopped ServerConnector@12b776cf{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:17:35,356 INFO [RS:0;jenkins-hbase17:46255] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:17:35,356 INFO [RS:1;jenkins-hbase17:36863] server.AbstractConnector(383): Stopped ServerConnector@53259b85{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:17:35,357 INFO [RS:2;jenkins-hbase17:33011] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@12e46771{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:17:35,356 INFO [RS:3;jenkins-hbase17:35009] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:17:35,357 INFO [RS:1;jenkins-hbase17:36863] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:17:35,357 INFO [RS:0;jenkins-hbase17:46255] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6fecfa89{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:17:35,358 INFO [RS:2;jenkins-hbase17:33011] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4050af2a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/hadoop.log.dir/,STOPPED} 2023-07-21 11:17:35,360 INFO [RS:1;jenkins-hbase17:36863] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3dbeab3b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:17:35,360 INFO [RS:0;jenkins-hbase17:46255] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@315670d7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/hadoop.log.dir/,STOPPED} 2023-07-21 11:17:35,360 INFO [RS:3;jenkins-hbase17:35009] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@b7983ad{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:17:35,360 INFO [RS:1;jenkins-hbase17:36863] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@6f188e8d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/hadoop.log.dir/,STOPPED} 2023-07-21 11:17:35,361 INFO [RS:3;jenkins-hbase17:35009] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4efade12{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/hadoop.log.dir/,STOPPED} 2023-07-21 11:17:35,363 INFO [RS:2;jenkins-hbase17:33011] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:17:35,363 INFO [RS:1;jenkins-hbase17:36863] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:17:35,363 INFO [RS:2;jenkins-hbase17:33011] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:17:35,363 INFO [RS:1;jenkins-hbase17:36863] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:17:35,363 INFO [RS:1;jenkins-hbase17:36863] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:17:35,364 INFO [RS:0;jenkins-hbase17:46255] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:17:35,364 INFO [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(3305): Received CLOSE for 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:35,364 INFO [RS:0;jenkins-hbase17:46255] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:17:35,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 7155e278007d2c2a97378c786865c2c6, disabling compactions & flushes 2023-07-21 11:17:35,363 INFO [RS:2;jenkins-hbase17:33011] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:17:35,365 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:35,365 INFO [RS:0;jenkins-hbase17:46255] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
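Editor's note: from "Shutting down minicluster" onward the log shows HBaseTestingUtility tearing the cluster down in order: the master's coprocessors stop, /hbase/running is deleted in ZooKeeper, and each region server stops its info server, heap memory manager, flush and snapshot managers before closing its regions. A minimal sketch of the JUnit lifecycle that drives this is below; the field and method names are the common HBaseTestingUtility pattern, not code taken from TestRSGroupsAdmin1.

    // Minimal sketch, assuming the usual HBaseTestingUtility lifecycle; the
    // "Shutting down minicluster" line above is printed by shutdownMiniCluster().
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;

    public class MiniClusterLifecycleSketch {
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUp() throws Exception {
        TEST_UTIL.startMiniCluster(3); // one master plus three region servers
      }

      @AfterClass
      public static void tearDown() throws Exception {
        TEST_UTIL.shutdownMiniCluster(); // closes regions, archives WALs, stops ZK and DFS
      }
    }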
2023-07-21 11:17:35,364 INFO [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:35,365 INFO [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(3305): Received CLOSE for 0d251b6fcd6df4af958f1fccdfdc34e4 2023-07-21 11:17:35,366 DEBUG [RS:1;jenkins-hbase17:36863] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3e121752 to 127.0.0.1:63555 2023-07-21 11:17:35,366 INFO [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(3305): Received CLOSE for 6c58d1ae91a12fe87aa9927da34b36d2 2023-07-21 11:17:35,364 INFO [RS:3;jenkins-hbase17:35009] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:17:35,366 INFO [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(3305): Received CLOSE for dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:35,366 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 0d251b6fcd6df4af958f1fccdfdc34e4, disabling compactions & flushes 2023-07-21 11:17:35,366 DEBUG [RS:1;jenkins-hbase17:36863] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:35,366 INFO [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:35,366 INFO [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 11:17:35,366 DEBUG [RS:0;jenkins-hbase17:46255] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x34e7ffc8 to 127.0.0.1:63555 2023-07-21 11:17:35,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:35,366 DEBUG [RS:0;jenkins-hbase17:46255] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:35,365 INFO [RS:2;jenkins-hbase17:33011] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:35,366 INFO [RS:0;jenkins-hbase17:46255] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:17:35,366 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. after waiting 0 ms 2023-07-21 11:17:35,367 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:35,366 INFO [RS:0;jenkins-hbase17:46255] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:17:35,367 INFO [RS:0;jenkins-hbase17:46255] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 11:17:35,367 INFO [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 11:17:35,366 DEBUG [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(1478): Online Regions={7155e278007d2c2a97378c786865c2c6=testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6.} 2023-07-21 11:17:35,366 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:35,366 INFO [RS:3;jenkins-hbase17:35009] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-21 11:17:35,366 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:35,367 INFO [RS:3;jenkins-hbase17:35009] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:17:35,367 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:35,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. after waiting 0 ms 2023-07-21 11:17:35,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:35,368 INFO [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-21 11:17:35,368 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 0d251b6fcd6df4af958f1fccdfdc34e4 1/1 column families, dataSize=22.37 KB heapSize=36.89 KB 2023-07-21 11:17:35,368 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 11:17:35,368 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 11:17:35,368 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 11:17:35,368 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 11:17:35,368 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 11:17:35,368 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=79.51 KB heapSize=125.46 KB 2023-07-21 11:17:35,366 DEBUG [RS:2;jenkins-hbase17:33011] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x221ebde1 to 127.0.0.1:63555 2023-07-21 11:17:35,368 DEBUG [RS:2;jenkins-hbase17:33011] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:35,369 INFO [RS:2;jenkins-hbase17:33011] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,33011,1689938225358; all regions closed. 
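Editor's note: before a region closes, its memstore is flushed. Above, hbase:rsgroup (0d251b6fcd6df4af958f1fccdfdc34e4) flushes ~22.37 KB from its single column family and hbase:meta (1588230740) flushes ~79.51 KB across three families. The same flush can also be requested explicitly from a client; the sketch below uses the public Admin#flush API, with connection bootstrapping assumed, and is not part of the test.

    // Illustrative only: an explicit flush of the tables whose regions are being
    // flushed on close above. Connection setup is assumed boilerplate.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ExplicitFlushSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.flush(TableName.valueOf("hbase:rsgroup")); // region 0d251b6f... above
          admin.flush(TableName.META_TABLE_NAME);          // hbase:meta, region 1588230740
        }
      }
    }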
2023-07-21 11:17:35,368 DEBUG [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(1478): Online Regions={0d251b6fcd6df4af958f1fccdfdc34e4=hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4., 1588230740=hbase:meta,,1.1588230740, 6c58d1ae91a12fe87aa9927da34b36d2=hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2., dac93182b0e7c37b865b422b78986437=unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437.} 2023-07-21 11:17:35,368 DEBUG [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(1504): Waiting on 7155e278007d2c2a97378c786865c2c6 2023-07-21 11:17:35,368 INFO [RS:3;jenkins-hbase17:35009] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:35,370 DEBUG [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(1504): Waiting on 0d251b6fcd6df4af958f1fccdfdc34e4, 1588230740, 6c58d1ae91a12fe87aa9927da34b36d2, dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:35,370 DEBUG [RS:3;jenkins-hbase17:35009] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x09c2e566 to 127.0.0.1:63555 2023-07-21 11:17:35,370 DEBUG [RS:3;jenkins-hbase17:35009] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:35,370 INFO [RS:3;jenkins-hbase17:35009] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,35009,1689938231406; all regions closed. 2023-07-21 11:17:35,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/testRename/7155e278007d2c2a97378c786865c2c6/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 11:17:35,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:35,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 7155e278007d2c2a97378c786865c2c6: 2023-07-21 11:17:35,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689938247899.7155e278007d2c2a97378c786865c2c6. 2023-07-21 11:17:35,407 DEBUG [RS:3;jenkins-hbase17:35009] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/oldWALs 2023-07-21 11:17:35,407 INFO [RS:3;jenkins-hbase17:35009] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C35009%2C1689938231406:(num 1689938231887) 2023-07-21 11:17:35,407 DEBUG [RS:3;jenkins-hbase17:35009] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:35,407 INFO [RS:3;jenkins-hbase17:35009] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:35,407 INFO [RS:3;jenkins-hbase17:35009] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 11:17:35,408 INFO [RS:3;jenkins-hbase17:35009] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:17:35,408 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:17:35,408 INFO [RS:3;jenkins-hbase17:35009] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-21 11:17:35,408 INFO [RS:3;jenkins-hbase17:35009] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 11:17:35,411 INFO [RS:3;jenkins-hbase17:35009] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:35009 2023-07-21 11:17:35,418 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:35,418 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:35,418 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:35,418 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:35,418 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:35,418 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:35,418 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:35,418 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:35009-0x101879855f5000b, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,35009,1689938231406 2023-07-21 11:17:35,418 DEBUG [RS:2;jenkins-hbase17:33011] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/oldWALs 2023-07-21 11:17:35,418 INFO [RS:2;jenkins-hbase17:33011] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C33011%2C1689938225358:(num 1689938228806) 2023-07-21 11:17:35,418 DEBUG [RS:2;jenkins-hbase17:33011] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:35,418 INFO [RS:2;jenkins-hbase17:33011] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:35,419 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,35009,1689938231406] 2023-07-21 11:17:35,418 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:35009-0x101879855f5000b, 
quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:35,419 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,35009,1689938231406; numProcessing=1 2023-07-21 11:17:35,420 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,35009,1689938231406 already deleted, retry=false 2023-07-21 11:17:35,420 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,35009,1689938231406 expired; onlineServers=3 2023-07-21 11:17:35,424 INFO [RS:2;jenkins-hbase17:33011] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 11:17:35,437 INFO [RS:2;jenkins-hbase17:33011] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:17:35,437 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:17:35,437 INFO [RS:2;jenkins-hbase17:33011] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:17:35,438 INFO [RS:2;jenkins-hbase17:33011] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 11:17:35,438 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.37 KB at sequenceid=107 (bloomFilter=true), to=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/.tmp/m/b24fd4af198c45bcb56e3d0d0e8385b1 2023-07-21 11:17:35,439 INFO [RS:2;jenkins-hbase17:33011] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:33011 2023-07-21 11:17:35,445 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:35,445 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:35,445 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33011,1689938225358 2023-07-21 11:17:35,445 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:35,446 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,33011,1689938225358] 2023-07-21 11:17:35,446 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,33011,1689938225358; numProcessing=2 
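Editor's note: the expiration handling above is driven by ZooKeeper ephemeral nodes. When a region server stops, its znode under /hbase/rs is deleted, watchers fire NodeDeleted/NodeChildrenChanged events, and RegionServerTracker marks the server expired (onlineServers counts down as each one goes). The sketch below uses the plain ZooKeeper client to show how such a child watch surfaces those events; it is an illustration of the mechanism, not HBase's RegionServerTracker code, and the quorum address reuses the test's 127.0.0.1:63555.

    // Illustration with the plain ZooKeeper API (not HBase internals): a child watch
    // on /hbase/rs fires when a region server's ephemeral znode disappears.
    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class RsZnodeWatchSketch {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:63555", 30000, event -> { });
        Watcher childWatch = new Watcher() {
          @Override
          public void process(WatchedEvent event) {
            // NodeChildrenChanged arrives here when a child such as
            // /hbase/rs/jenkins-hbase17.apache.org,35009,1689938231406 is deleted.
            System.out.println(event.getType() + " on " + event.getPath());
          }
        };
        List<String> liveServers = zk.getChildren("/hbase/rs", childWatch);
        System.out.println("live region servers: " + liveServers);
        zk.close();
      }
    }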
2023-07-21 11:17:35,447 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,33011,1689938225358 already deleted, retry=false 2023-07-21 11:17:35,447 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,33011,1689938225358 expired; onlineServers=2 2023-07-21 11:17:35,460 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b24fd4af198c45bcb56e3d0d0e8385b1 2023-07-21 11:17:35,461 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=73.52 KB at sequenceid=204 (bloomFilter=false), to=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/.tmp/info/eee9636170f64ddf9bd09c819dfe7dff 2023-07-21 11:17:35,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/.tmp/m/b24fd4af198c45bcb56e3d0d0e8385b1 as hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/m/b24fd4af198c45bcb56e3d0d0e8385b1 2023-07-21 11:17:35,467 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eee9636170f64ddf9bd09c819dfe7dff 2023-07-21 11:17:35,467 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b24fd4af198c45bcb56e3d0d0e8385b1 2023-07-21 11:17:35,468 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/m/b24fd4af198c45bcb56e3d0d0e8385b1, entries=22, sequenceid=107, filesize=5.9 K 2023-07-21 11:17:35,469 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.37 KB/22907, heapSize ~36.88 KB/37760, currentSize=0 B/0 for 0d251b6fcd6df4af958f1fccdfdc34e4 in 101ms, sequenceid=107, compaction requested=true 2023-07-21 11:17:35,469 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 11:17:35,512 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/rsgroup/0d251b6fcd6df4af958f1fccdfdc34e4/recovered.edits/110.seqid, newMaxSeqId=110, maxSeqId=35 2023-07-21 11:17:35,513 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:17:35,514 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 
2023-07-21 11:17:35,514 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 0d251b6fcd6df4af958f1fccdfdc34e4: 2023-07-21 11:17:35,514 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689938230370.0d251b6fcd6df4af958f1fccdfdc34e4. 2023-07-21 11:17:35,514 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 6c58d1ae91a12fe87aa9927da34b36d2, disabling compactions & flushes 2023-07-21 11:17:35,514 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2. 2023-07-21 11:17:35,514 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2. 2023-07-21 11:17:35,514 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2. after waiting 0 ms 2023-07-21 11:17:35,515 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2. 2023-07-21 11:17:35,515 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 6c58d1ae91a12fe87aa9927da34b36d2 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-21 11:17:35,516 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=204 (bloomFilter=false), to=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/.tmp/rep_barrier/bed5c51e720149d8a5d727f02ef151ff 2023-07-21 11:17:35,529 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bed5c51e720149d8a5d727f02ef151ff 2023-07-21 11:17:35,533 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:35009-0x101879855f5000b, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:35,533 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:35009-0x101879855f5000b, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:35,541 INFO [RS:3;jenkins-hbase17:35009] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,35009,1689938231406; zookeeper connection closed. 2023-07-21 11:17:35,543 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@17add84f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@17add84f 2023-07-21 11:17:35,570 INFO [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,36863,1689938225106; all regions closed. 
2023-07-21 11:17:35,570 DEBUG [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(1504): Waiting on 1588230740, 6c58d1ae91a12fe87aa9927da34b36d2, dac93182b0e7c37b865b422b78986437 2023-07-21 11:17:35,596 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 11:17:35,596 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 11:17:35,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/namespace/6c58d1ae91a12fe87aa9927da34b36d2/.tmp/info/f049f1c8327641c599cdec9fd40d3e7f 2023-07-21 11:17:35,611 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/WALs/jenkins-hbase17.apache.org,36863,1689938225106/jenkins-hbase17.apache.org%2C36863%2C1689938225106.1689938228817 not finished, retry = 0 2023-07-21 11:17:35,614 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/namespace/6c58d1ae91a12fe87aa9927da34b36d2/.tmp/info/f049f1c8327641c599cdec9fd40d3e7f as hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/namespace/6c58d1ae91a12fe87aa9927da34b36d2/info/f049f1c8327641c599cdec9fd40d3e7f 2023-07-21 11:17:35,621 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.99 KB at sequenceid=204 (bloomFilter=false), to=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/.tmp/table/3003a18e127045bc980986f3499d6dbb 2023-07-21 11:17:35,632 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/namespace/6c58d1ae91a12fe87aa9927da34b36d2/info/f049f1c8327641c599cdec9fd40d3e7f, entries=2, sequenceid=6, filesize=4.8 K 2023-07-21 11:17:35,632 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3003a18e127045bc980986f3499d6dbb 2023-07-21 11:17:35,635 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/.tmp/info/eee9636170f64ddf9bd09c819dfe7dff as hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/info/eee9636170f64ddf9bd09c819dfe7dff 2023-07-21 11:17:35,636 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 6c58d1ae91a12fe87aa9927da34b36d2 in 121ms, sequenceid=6, compaction requested=false 2023-07-21 11:17:35,645 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 11:17:35,645 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker 
was stopped 2023-07-21 11:17:35,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/namespace/6c58d1ae91a12fe87aa9927da34b36d2/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-21 11:17:35,661 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2. 2023-07-21 11:17:35,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 6c58d1ae91a12fe87aa9927da34b36d2: 2023-07-21 11:17:35,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689938229888.6c58d1ae91a12fe87aa9927da34b36d2. 2023-07-21 11:17:35,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing dac93182b0e7c37b865b422b78986437, disabling compactions & flushes 2023-07-21 11:17:35,662 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:35,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:35,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. after waiting 0 ms 2023-07-21 11:17:35,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:35,664 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eee9636170f64ddf9bd09c819dfe7dff 2023-07-21 11:17:35,664 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/info/eee9636170f64ddf9bd09c819dfe7dff, entries=100, sequenceid=204, filesize=16.3 K 2023-07-21 11:17:35,667 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/.tmp/rep_barrier/bed5c51e720149d8a5d727f02ef151ff as hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/rep_barrier/bed5c51e720149d8a5d727f02ef151ff 2023-07-21 11:17:35,668 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/default/unmovedTable/dac93182b0e7c37b865b422b78986437/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 11:17:35,669 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 
2023-07-21 11:17:35,669 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for dac93182b0e7c37b865b422b78986437: 2023-07-21 11:17:35,669 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689938249558.dac93182b0e7c37b865b422b78986437. 2023-07-21 11:17:35,676 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bed5c51e720149d8a5d727f02ef151ff 2023-07-21 11:17:35,676 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/rep_barrier/bed5c51e720149d8a5d727f02ef151ff, entries=18, sequenceid=204, filesize=6.9 K 2023-07-21 11:17:35,678 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/.tmp/table/3003a18e127045bc980986f3499d6dbb as hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/table/3003a18e127045bc980986f3499d6dbb 2023-07-21 11:17:35,696 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3003a18e127045bc980986f3499d6dbb 2023-07-21 11:17:35,696 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/table/3003a18e127045bc980986f3499d6dbb, entries=31, sequenceid=204, filesize=7.4 K 2023-07-21 11:17:35,697 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~79.51 KB/81416, heapSize ~125.41 KB/128424, currentSize=0 B/0 for 1588230740 in 329ms, sequenceid=204, compaction requested=false 2023-07-21 11:17:35,710 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/data/hbase/meta/1588230740/recovered.edits/207.seqid, newMaxSeqId=207, maxSeqId=1 2023-07-21 11:17:35,710 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:17:35,711 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 11:17:35,711 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 11:17:35,712 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 11:17:35,716 DEBUG [RS:1;jenkins-hbase17:36863] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/oldWALs 2023-07-21 11:17:35,716 INFO [RS:1;jenkins-hbase17:36863] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C36863%2C1689938225106:(num 1689938228817) 2023-07-21 11:17:35,716 DEBUG [RS:1;jenkins-hbase17:36863] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:35,716 INFO [RS:1;jenkins-hbase17:36863] 
regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:35,716 INFO [RS:1;jenkins-hbase17:36863] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 11:17:35,716 INFO [RS:1;jenkins-hbase17:36863] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:17:35,717 INFO [RS:1;jenkins-hbase17:36863] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:17:35,717 INFO [RS:1;jenkins-hbase17:36863] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 11:17:35,717 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:17:35,721 INFO [RS:1;jenkins-hbase17:36863] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:36863 2023-07-21 11:17:35,723 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:35,724 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:35,724 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36863,1689938225106 2023-07-21 11:17:35,728 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,36863,1689938225106] 2023-07-21 11:17:35,728 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,36863,1689938225106; numProcessing=3 2023-07-21 11:17:35,729 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,36863,1689938225106 already deleted, retry=false 2023-07-21 11:17:35,729 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,36863,1689938225106 expired; onlineServers=1 2023-07-21 11:17:35,770 INFO [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,46255,1689938224878; all regions closed. 
2023-07-21 11:17:35,782 DEBUG [RS:0;jenkins-hbase17:46255] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/oldWALs 2023-07-21 11:17:35,783 INFO [RS:0;jenkins-hbase17:46255] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C46255%2C1689938224878.meta:.meta(num 1689938229328) 2023-07-21 11:17:35,800 DEBUG [RS:0;jenkins-hbase17:46255] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/oldWALs 2023-07-21 11:17:35,801 INFO [RS:0;jenkins-hbase17:46255] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C46255%2C1689938224878:(num 1689938228806) 2023-07-21 11:17:35,801 DEBUG [RS:0;jenkins-hbase17:46255] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:35,801 INFO [RS:0;jenkins-hbase17:46255] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:35,801 INFO [RS:0;jenkins-hbase17:46255] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 11:17:35,802 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:17:35,803 INFO [RS:0;jenkins-hbase17:46255] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:46255 2023-07-21 11:17:35,806 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,46255,1689938224878 2023-07-21 11:17:35,806 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:35,807 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,46255,1689938224878] 2023-07-21 11:17:35,807 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,46255,1689938224878; numProcessing=4 2023-07-21 11:17:35,808 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,46255,1689938224878 already deleted, retry=false 2023-07-21 11:17:35,808 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,46255,1689938224878 expired; onlineServers=0 2023-07-21 11:17:35,808 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,40703,1689938222766' ***** 2023-07-21 11:17:35,808 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 11:17:35,808 DEBUG [M:0;jenkins-hbase17:40703] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@186ae707, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:17:35,808 INFO [M:0;jenkins-hbase17:40703] regionserver.HRegionServer(1109): Stopping 
infoServer 2023-07-21 11:17:35,812 INFO [M:0;jenkins-hbase17:40703] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@53796997{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 11:17:35,813 INFO [M:0;jenkins-hbase17:40703] server.AbstractConnector(383): Stopped ServerConnector@695e75c2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:17:35,813 INFO [M:0;jenkins-hbase17:40703] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:17:35,814 INFO [M:0;jenkins-hbase17:40703] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@47667999{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:17:35,815 INFO [M:0;jenkins-hbase17:40703] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2014df18{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/hadoop.log.dir/,STOPPED} 2023-07-21 11:17:35,815 INFO [M:0;jenkins-hbase17:40703] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,40703,1689938222766 2023-07-21 11:17:35,815 INFO [M:0;jenkins-hbase17:40703] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,40703,1689938222766; all regions closed. 2023-07-21 11:17:35,816 DEBUG [M:0;jenkins-hbase17:40703] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:35,816 INFO [M:0;jenkins-hbase17:40703] master.HMaster(1491): Stopping master jetty server 2023-07-21 11:17:35,816 INFO [M:0;jenkins-hbase17:40703] server.AbstractConnector(383): Stopped ServerConnector@27051488{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:17:35,817 DEBUG [M:0;jenkins-hbase17:40703] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 11:17:35,817 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-21 11:17:35,817 DEBUG [M:0;jenkins-hbase17:40703] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 11:17:35,817 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938228153] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938228153,5,FailOnTimeoutGroup] 2023-07-21 11:17:35,817 INFO [M:0;jenkins-hbase17:40703] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 11:17:35,817 INFO [M:0;jenkins-hbase17:40703] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-21 11:17:35,818 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938228153] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938228153,5,FailOnTimeoutGroup] 2023-07-21 11:17:35,818 INFO [M:0;jenkins-hbase17:40703] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown 2023-07-21 11:17:35,818 DEBUG [M:0;jenkins-hbase17:40703] master.HMaster(1512): Stopping service threads 2023-07-21 11:17:35,818 INFO [M:0;jenkins-hbase17:40703] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 11:17:35,819 ERROR [M:0;jenkins-hbase17:40703] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-21 11:17:35,821 INFO [M:0;jenkins-hbase17:40703] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 11:17:35,821 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-21 11:17:35,847 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 11:17:35,907 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:35,907 INFO [RS:0;jenkins-hbase17:46255] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,46255,1689938224878; zookeeper connection closed. 
2023-07-21 11:17:35,908 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:46255-0x101879855f50001, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:35,908 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@121809f5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@121809f5 2023-07-21 11:17:35,909 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 11:17:35,909 INFO [M:0;jenkins-hbase17:40703] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 11:17:35,909 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:35,909 INFO [M:0;jenkins-hbase17:40703] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 11:17:35,909 DEBUG [M:0;jenkins-hbase17:40703] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 11:17:35,909 INFO [M:0;jenkins-hbase17:40703] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:17:35,909 DEBUG [M:0;jenkins-hbase17:40703] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:17:35,909 DEBUG [M:0;jenkins-hbase17:40703] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 11:17:35,909 DEBUG [M:0;jenkins-hbase17:40703] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 11:17:35,909 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/master already deleted, retry=false 2023-07-21 11:17:35,909 INFO [M:0;jenkins-hbase17:40703] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=510.69 KB heapSize=610.95 KB 2023-07-21 11:17:35,909 DEBUG [RegionServerTracker-0] master.ActiveMasterManager(335): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Failed delete of our master address node; KeeperErrorCode = NoNode for /hbase/master 2023-07-21 11:17:35,909 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:17:35,944 INFO [M:0;jenkins-hbase17:40703] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=510.69 KB at sequenceid=1128 (bloomFilter=true), to=hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7a7f44f931c3449aad0b48a0e2a717f3 2023-07-21 11:17:35,950 DEBUG [M:0;jenkins-hbase17:40703] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7a7f44f931c3449aad0b48a0e2a717f3 as hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7a7f44f931c3449aad0b48a0e2a717f3 2023-07-21 11:17:35,957 INFO [M:0;jenkins-hbase17:40703] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7a7f44f931c3449aad0b48a0e2a717f3, entries=151, sequenceid=1128, filesize=26.7 K 2023-07-21 11:17:35,958 INFO [M:0;jenkins-hbase17:40703] regionserver.HRegion(2948): Finished flush of dataSize ~510.69 KB/522942, heapSize ~610.93 KB/625592, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 49ms, sequenceid=1128, compaction requested=false 2023-07-21 11:17:35,960 INFO [M:0;jenkins-hbase17:40703] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:17:35,960 DEBUG [M:0;jenkins-hbase17:40703] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 11:17:35,969 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:17:35,969 INFO [M:0;jenkins-hbase17:40703] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 
2023-07-21 11:17:35,970 INFO [M:0;jenkins-hbase17:40703] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:40703 2023-07-21 11:17:35,972 DEBUG [M:0;jenkins-hbase17:40703] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,40703,1689938222766 already deleted, retry=false 2023-07-21 11:17:36,143 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:36,143 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): master:40703-0x101879855f50000, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:36,144 INFO [M:0;jenkins-hbase17:40703] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,40703,1689938222766; zookeeper connection closed. 2023-07-21 11:17:36,244 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:36,244 INFO [RS:1;jenkins-hbase17:36863] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,36863,1689938225106; zookeeper connection closed. 2023-07-21 11:17:36,244 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:36863-0x101879855f50002, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:36,251 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@796e458] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@796e458 2023-07-21 11:17:36,344 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:36,344 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): regionserver:33011-0x101879855f50003, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:36,348 INFO [RS:2;jenkins-hbase17:33011] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,33011,1689938225358; zookeeper connection closed. 
2023-07-21 11:17:36,352 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@31730798] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@31730798 2023-07-21 11:17:36,356 INFO [Listener at localhost.localdomain/38409] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-21 11:17:36,357 WARN [Listener at localhost.localdomain/38409] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 11:17:36,371 INFO [Listener at localhost.localdomain/38409] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 11:17:36,475 WARN [BP-1027894687-136.243.18.41-1689938218944 heartbeating to localhost.localdomain/127.0.0.1:38415] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 11:17:36,475 WARN [BP-1027894687-136.243.18.41-1689938218944 heartbeating to localhost.localdomain/127.0.0.1:38415] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1027894687-136.243.18.41-1689938218944 (Datanode Uuid d4d4284a-5481-46a8-929f-860ef8c6abc4) service to localhost.localdomain/127.0.0.1:38415 2023-07-21 11:17:36,477 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/cluster_037526aa-760a-5f2b-e269-4eb750dd63c1/dfs/data/data5/current/BP-1027894687-136.243.18.41-1689938218944] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 11:17:36,477 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/cluster_037526aa-760a-5f2b-e269-4eb750dd63c1/dfs/data/data6/current/BP-1027894687-136.243.18.41-1689938218944] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 11:17:36,481 WARN [Listener at localhost.localdomain/38409] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 11:17:36,486 INFO [Listener at localhost.localdomain/38409] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 11:17:36,490 WARN [BP-1027894687-136.243.18.41-1689938218944 heartbeating to localhost.localdomain/127.0.0.1:38415] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 11:17:36,490 WARN [BP-1027894687-136.243.18.41-1689938218944 heartbeating to localhost.localdomain/127.0.0.1:38415] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1027894687-136.243.18.41-1689938218944 (Datanode Uuid 8f07919f-8e51-45e6-bdb5-7bdfad95dc80) service to localhost.localdomain/127.0.0.1:38415 2023-07-21 11:17:36,491 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/cluster_037526aa-760a-5f2b-e269-4eb750dd63c1/dfs/data/data3/current/BP-1027894687-136.243.18.41-1689938218944] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 11:17:36,491 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/cluster_037526aa-760a-5f2b-e269-4eb750dd63c1/dfs/data/data4/current/BP-1027894687-136.243.18.41-1689938218944] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 11:17:36,493 WARN [Listener at localhost.localdomain/38409] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 11:17:36,513 INFO [Listener at localhost.localdomain/38409] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 11:17:36,622 WARN [BP-1027894687-136.243.18.41-1689938218944 heartbeating to localhost.localdomain/127.0.0.1:38415] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 11:17:36,622 WARN [BP-1027894687-136.243.18.41-1689938218944 heartbeating to localhost.localdomain/127.0.0.1:38415] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1027894687-136.243.18.41-1689938218944 (Datanode Uuid 759d3d8d-aad3-4b98-b90a-bcd18ad3f73f) service to localhost.localdomain/127.0.0.1:38415 2023-07-21 11:17:36,623 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/cluster_037526aa-760a-5f2b-e269-4eb750dd63c1/dfs/data/data1/current/BP-1027894687-136.243.18.41-1689938218944] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 11:17:36,623 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/cluster_037526aa-760a-5f2b-e269-4eb750dd63c1/dfs/data/data2/current/BP-1027894687-136.243.18.41-1689938218944] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 11:17:36,660 INFO [Listener at localhost.localdomain/38409] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-21 11:17:36,777 INFO [Listener at localhost.localdomain/38409] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-21 11:17:36,839 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-21 11:17:36,839 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-21 11:17:36,840 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/hadoop.log.dir so I do NOT create it in target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513 2023-07-21 11:17:36,840 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/53ba04d3-9fbe-1cd4-ba4c-823c59e925e1/hadoop.tmp.dir so I do NOT create it in 
target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513 2023-07-21 11:17:36,840 INFO [Listener at localhost.localdomain/38409] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/cluster_b86798cd-2ac0-1178-5510-e4b73708cff7, deleteOnExit=true 2023-07-21 11:17:36,840 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-21 11:17:36,841 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/test.cache.data in system properties and HBase conf 2023-07-21 11:17:36,841 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/hadoop.tmp.dir in system properties and HBase conf 2023-07-21 11:17:36,841 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/hadoop.log.dir in system properties and HBase conf 2023-07-21 11:17:36,841 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-21 11:17:36,841 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-21 11:17:36,841 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-21 11:17:36,842 DEBUG [Listener at localhost.localdomain/38409] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-21 11:17:36,842 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-21 11:17:36,842 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-21 11:17:36,842 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-21 11:17:36,842 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 11:17:36,843 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-21 11:17:36,843 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-21 11:17:36,843 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 11:17:36,843 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 11:17:36,843 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-21 11:17:36,843 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/nfs.dump.dir in system properties and HBase conf 2023-07-21 11:17:36,843 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/java.io.tmpdir in system properties and HBase conf 2023-07-21 11:17:36,843 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 11:17:36,843 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-21 11:17:36,844 INFO [Listener at localhost.localdomain/38409] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-21 11:17:36,847 WARN [Listener at localhost.localdomain/38409] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 11:17:36,847 WARN [Listener at localhost.localdomain/38409] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 11:17:36,875 DEBUG [Listener at localhost.localdomain/38409-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101879855f5000a, quorum=127.0.0.1:63555, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-21 11:17:36,875 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101879855f5000a, quorum=127.0.0.1:63555, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-21 11:17:36,907 WARN [Listener at localhost.localdomain/38409] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 11:17:36,911 INFO [Listener at localhost.localdomain/38409] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 11:17:36,923 INFO [Listener at localhost.localdomain/38409] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/java.io.tmpdir/Jetty_localhost_localdomain_35189_hdfs____l25smi/webapp 2023-07-21 11:17:37,033 INFO [Listener at localhost.localdomain/38409] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:35189 2023-07-21 11:17:37,039 WARN [Listener at localhost.localdomain/38409] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 11:17:37,039 WARN [Listener at localhost.localdomain/38409] conf.Configuration(1701): No 
unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 11:17:37,100 WARN [Listener at localhost.localdomain/42461] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 11:17:37,120 WARN [Listener at localhost.localdomain/42461] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 11:17:37,122 WARN [Listener at localhost.localdomain/42461] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 11:17:37,123 INFO [Listener at localhost.localdomain/42461] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 11:17:37,128 INFO [Listener at localhost.localdomain/42461] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/java.io.tmpdir/Jetty_localhost_34975_datanode____rjyy7x/webapp 2023-07-21 11:17:37,203 INFO [Listener at localhost.localdomain/42461] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34975 2023-07-21 11:17:37,223 WARN [Listener at localhost.localdomain/45987] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 11:17:37,253 WARN [Listener at localhost.localdomain/45987] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 11:17:37,256 WARN [Listener at localhost.localdomain/45987] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 11:17:37,258 INFO [Listener at localhost.localdomain/45987] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 11:17:37,265 INFO [Listener at localhost.localdomain/45987] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/java.io.tmpdir/Jetty_localhost_46455_datanode____.3uwhs7/webapp 2023-07-21 11:17:37,395 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa1f44f96abb9205b: Processing first storage report for DS-52662990-f118-4ed9-aadc-56f121229758 from datanode 2b053258-5df3-41a9-9601-60ca8e00ac80 2023-07-21 11:17:37,397 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa1f44f96abb9205b: from storage DS-52662990-f118-4ed9-aadc-56f121229758 node DatanodeRegistration(127.0.0.1:44343, datanodeUuid=2b053258-5df3-41a9-9601-60ca8e00ac80, infoPort=33165, infoSecurePort=0, ipcPort=45987, storageInfo=lv=-57;cid=testClusterID;nsid=1081664609;c=1689938256849), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-21 11:17:37,398 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa1f44f96abb9205b: Processing first storage report for DS-b409698d-9cdd-42a9-8c0c-9204543387c0 from datanode 2b053258-5df3-41a9-9601-60ca8e00ac80 2023-07-21 11:17:37,398 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa1f44f96abb9205b: from storage DS-b409698d-9cdd-42a9-8c0c-9204543387c0 node 
DatanodeRegistration(127.0.0.1:44343, datanodeUuid=2b053258-5df3-41a9-9601-60ca8e00ac80, infoPort=33165, infoSecurePort=0, ipcPort=45987, storageInfo=lv=-57;cid=testClusterID;nsid=1081664609;c=1689938256849), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 11:17:37,432 INFO [Listener at localhost.localdomain/45987] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46455 2023-07-21 11:17:37,464 WARN [Listener at localhost.localdomain/38719] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 11:17:37,516 WARN [Listener at localhost.localdomain/38719] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 11:17:37,519 WARN [Listener at localhost.localdomain/38719] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 11:17:37,520 INFO [Listener at localhost.localdomain/38719] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 11:17:37,524 INFO [Listener at localhost.localdomain/38719] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/java.io.tmpdir/Jetty_localhost_43523_datanode____a5rzr2/webapp 2023-07-21 11:17:37,578 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb3a6fe54ec9f1535: Processing first storage report for DS-c1cd11e1-3262-4c43-b439-2fbd01b617bc from datanode 5335fc3b-aacc-43ff-a74b-10b68b202ff2 2023-07-21 11:17:37,578 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb3a6fe54ec9f1535: from storage DS-c1cd11e1-3262-4c43-b439-2fbd01b617bc node DatanodeRegistration(127.0.0.1:36029, datanodeUuid=5335fc3b-aacc-43ff-a74b-10b68b202ff2, infoPort=46199, infoSecurePort=0, ipcPort=38719, storageInfo=lv=-57;cid=testClusterID;nsid=1081664609;c=1689938256849), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 11:17:37,578 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb3a6fe54ec9f1535: Processing first storage report for DS-951a78e5-e2c3-434d-86a5-90c1c2dcf3ae from datanode 5335fc3b-aacc-43ff-a74b-10b68b202ff2 2023-07-21 11:17:37,579 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb3a6fe54ec9f1535: from storage DS-951a78e5-e2c3-434d-86a5-90c1c2dcf3ae node DatanodeRegistration(127.0.0.1:36029, datanodeUuid=5335fc3b-aacc-43ff-a74b-10b68b202ff2, infoPort=46199, infoSecurePort=0, ipcPort=38719, storageInfo=lv=-57;cid=testClusterID;nsid=1081664609;c=1689938256849), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 11:17:37,636 INFO [Listener at localhost.localdomain/38719] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43523 2023-07-21 11:17:37,664 WARN [Listener at localhost.localdomain/34273] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 11:17:37,791 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2a3f128bc369cf99: Processing first storage report for DS-713925bb-3495-4800-b577-d82ab9b166e8 from datanode 
f15e7f2e-0dbc-4ab6-8164-94133f83e66b 2023-07-21 11:17:37,791 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2a3f128bc369cf99: from storage DS-713925bb-3495-4800-b577-d82ab9b166e8 node DatanodeRegistration(127.0.0.1:45531, datanodeUuid=f15e7f2e-0dbc-4ab6-8164-94133f83e66b, infoPort=38319, infoSecurePort=0, ipcPort=34273, storageInfo=lv=-57;cid=testClusterID;nsid=1081664609;c=1689938256849), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 11:17:37,791 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2a3f128bc369cf99: Processing first storage report for DS-c21b7a07-e691-42b2-96ab-14246ecee717 from datanode f15e7f2e-0dbc-4ab6-8164-94133f83e66b 2023-07-21 11:17:37,791 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2a3f128bc369cf99: from storage DS-c21b7a07-e691-42b2-96ab-14246ecee717 node DatanodeRegistration(127.0.0.1:45531, datanodeUuid=f15e7f2e-0dbc-4ab6-8164-94133f83e66b, infoPort=38319, infoSecurePort=0, ipcPort=34273, storageInfo=lv=-57;cid=testClusterID;nsid=1081664609;c=1689938256849), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 11:17:37,804 DEBUG [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513 2023-07-21 11:17:37,819 INFO [Listener at localhost.localdomain/34273] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/cluster_b86798cd-2ac0-1178-5510-e4b73708cff7/zookeeper_0, clientPort=62351, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/cluster_b86798cd-2ac0-1178-5510-e4b73708cff7/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/cluster_b86798cd-2ac0-1178-5510-e4b73708cff7/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-21 11:17:37,821 INFO [Listener at localhost.localdomain/34273] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62351 2023-07-21 11:17:37,822 INFO [Listener at localhost.localdomain/34273] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:37,823 INFO [Listener at localhost.localdomain/34273] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:37,863 INFO [Listener at localhost.localdomain/34273] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871 with version=8 2023-07-21 11:17:37,863 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to 
hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/hbase-staging 2023-07-21 11:17:37,865 DEBUG [Listener at localhost.localdomain/34273] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 11:17:37,865 DEBUG [Listener at localhost.localdomain/34273] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 11:17:37,865 DEBUG [Listener at localhost.localdomain/34273] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-21 11:17:37,865 DEBUG [Listener at localhost.localdomain/34273] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-21 11:17:37,866 INFO [Listener at localhost.localdomain/34273] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:17:37,866 INFO [Listener at localhost.localdomain/34273] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:37,867 INFO [Listener at localhost.localdomain/34273] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:37,867 INFO [Listener at localhost.localdomain/34273] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:17:37,867 INFO [Listener at localhost.localdomain/34273] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:37,867 INFO [Listener at localhost.localdomain/34273] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:17:37,867 INFO [Listener at localhost.localdomain/34273] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:17:37,874 INFO [Listener at localhost.localdomain/34273] ipc.NettyRpcServer(120): Bind to /136.243.18.41:45117 2023-07-21 11:17:37,876 INFO [Listener at localhost.localdomain/34273] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:37,877 INFO [Listener at localhost.localdomain/34273] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:37,878 INFO [Listener at localhost.localdomain/34273] zookeeper.RecoverableZooKeeper(93): Process identifier=master:45117 connecting to ZooKeeper ensemble=127.0.0.1:62351 2023-07-21 11:17:37,908 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:451170x0, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:17:37,913 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:45117-0x1018798e2b60000 connected 2023-07-21 11:17:37,978 DEBUG [Listener at 
localhost.localdomain/34273] zookeeper.ZKUtil(164): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:17:37,980 DEBUG [Listener at localhost.localdomain/34273] zookeeper.ZKUtil(164): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:37,983 DEBUG [Listener at localhost.localdomain/34273] zookeeper.ZKUtil(164): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:17:37,993 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45117 2023-07-21 11:17:37,993 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45117 2023-07-21 11:17:37,996 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45117 2023-07-21 11:17:37,998 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45117 2023-07-21 11:17:38,000 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45117 2023-07-21 11:17:38,002 INFO [Listener at localhost.localdomain/34273] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:17:38,003 INFO [Listener at localhost.localdomain/34273] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:17:38,003 INFO [Listener at localhost.localdomain/34273] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:17:38,003 INFO [Listener at localhost.localdomain/34273] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 11:17:38,003 INFO [Listener at localhost.localdomain/34273] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:17:38,004 INFO [Listener at localhost.localdomain/34273] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:17:38,004 INFO [Listener at localhost.localdomain/34273] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:17:38,004 INFO [Listener at localhost.localdomain/34273] http.HttpServer(1146): Jetty bound to port 39307 2023-07-21 11:17:38,004 INFO [Listener at localhost.localdomain/34273] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:17:38,026 INFO [Listener at localhost.localdomain/34273] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:38,027 INFO [Listener at localhost.localdomain/34273] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@b6871bd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:17:38,027 INFO [Listener at localhost.localdomain/34273] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:38,028 INFO [Listener at localhost.localdomain/34273] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7c403817{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:17:38,140 INFO [Listener at localhost.localdomain/34273] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:17:38,142 INFO [Listener at localhost.localdomain/34273] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:17:38,142 INFO [Listener at localhost.localdomain/34273] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:17:38,143 INFO [Listener at localhost.localdomain/34273] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:17:38,144 INFO [Listener at localhost.localdomain/34273] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:38,146 INFO [Listener at localhost.localdomain/34273] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1e8e9bbe{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/java.io.tmpdir/jetty-0_0_0_0-39307-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3862674300286876371/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 11:17:38,148 INFO [Listener at localhost.localdomain/34273] server.AbstractConnector(333): Started ServerConnector@3cabcf74{HTTP/1.1, (http/1.1)}{0.0.0.0:39307} 2023-07-21 11:17:38,148 INFO [Listener at localhost.localdomain/34273] server.Server(415): Started @41180ms 2023-07-21 11:17:38,149 INFO [Listener at localhost.localdomain/34273] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871, hbase.cluster.distributed=false 2023-07-21 11:17:38,169 INFO [Listener at localhost.localdomain/34273] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:17:38,169 INFO [Listener at localhost.localdomain/34273] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:38,170 INFO [Listener 
at localhost.localdomain/34273] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:38,170 INFO [Listener at localhost.localdomain/34273] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:17:38,170 INFO [Listener at localhost.localdomain/34273] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:38,170 INFO [Listener at localhost.localdomain/34273] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:17:38,170 INFO [Listener at localhost.localdomain/34273] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:17:38,174 INFO [Listener at localhost.localdomain/34273] ipc.NettyRpcServer(120): Bind to /136.243.18.41:34379 2023-07-21 11:17:38,175 INFO [Listener at localhost.localdomain/34273] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:17:38,192 DEBUG [Listener at localhost.localdomain/34273] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:17:38,193 INFO [Listener at localhost.localdomain/34273] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:38,195 INFO [Listener at localhost.localdomain/34273] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:38,197 INFO [Listener at localhost.localdomain/34273] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34379 connecting to ZooKeeper ensemble=127.0.0.1:62351 2023-07-21 11:17:38,216 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:343790x0, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:17:38,221 DEBUG [Listener at localhost.localdomain/34273] zookeeper.ZKUtil(164): regionserver:343790x0, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:17:38,222 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34379-0x1018798e2b60001 connected 2023-07-21 11:17:38,224 DEBUG [Listener at localhost.localdomain/34273] zookeeper.ZKUtil(164): regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:38,225 DEBUG [Listener at localhost.localdomain/34273] zookeeper.ZKUtil(164): regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:17:38,232 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34379 2023-07-21 11:17:38,233 DEBUG [Listener at localhost.localdomain/34273] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34379 2023-07-21 11:17:38,247 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34379 2023-07-21 11:17:38,252 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34379 2023-07-21 11:17:38,256 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34379 2023-07-21 11:17:38,259 INFO [Listener at localhost.localdomain/34273] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:17:38,259 INFO [Listener at localhost.localdomain/34273] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:17:38,259 INFO [Listener at localhost.localdomain/34273] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:17:38,260 INFO [Listener at localhost.localdomain/34273] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:17:38,260 INFO [Listener at localhost.localdomain/34273] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:17:38,260 INFO [Listener at localhost.localdomain/34273] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:17:38,260 INFO [Listener at localhost.localdomain/34273] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:17:38,261 INFO [Listener at localhost.localdomain/34273] http.HttpServer(1146): Jetty bound to port 42371 2023-07-21 11:17:38,262 INFO [Listener at localhost.localdomain/34273] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:17:38,268 INFO [Listener at localhost.localdomain/34273] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:38,268 INFO [Listener at localhost.localdomain/34273] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@266b9be0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:17:38,269 INFO [Listener at localhost.localdomain/34273] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:38,269 INFO [Listener at localhost.localdomain/34273] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@46d1080d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:17:38,388 INFO [Listener at localhost.localdomain/34273] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:17:38,389 INFO [Listener at localhost.localdomain/34273] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:17:38,389 INFO [Listener at localhost.localdomain/34273] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:17:38,389 INFO [Listener at localhost.localdomain/34273] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:17:38,392 INFO [Listener at localhost.localdomain/34273] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:38,393 INFO [Listener at localhost.localdomain/34273] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@63d31eea{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/java.io.tmpdir/jetty-0_0_0_0-42371-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2657144783504249867/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:38,395 INFO [Listener at localhost.localdomain/34273] server.AbstractConnector(333): Started ServerConnector@4320000f{HTTP/1.1, (http/1.1)}{0.0.0.0:42371} 2023-07-21 11:17:38,395 INFO [Listener at localhost.localdomain/34273] server.Server(415): Started @41427ms 2023-07-21 11:17:38,412 INFO [Listener at localhost.localdomain/34273] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:17:38,412 INFO [Listener at localhost.localdomain/34273] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:38,413 INFO [Listener at localhost.localdomain/34273] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
11:17:38,413 INFO [Listener at localhost.localdomain/34273] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:17:38,413 INFO [Listener at localhost.localdomain/34273] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:38,413 INFO [Listener at localhost.localdomain/34273] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:17:38,413 INFO [Listener at localhost.localdomain/34273] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:17:38,420 INFO [Listener at localhost.localdomain/34273] ipc.NettyRpcServer(120): Bind to /136.243.18.41:43393 2023-07-21 11:17:38,421 INFO [Listener at localhost.localdomain/34273] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:17:38,426 DEBUG [Listener at localhost.localdomain/34273] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:17:38,427 INFO [Listener at localhost.localdomain/34273] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:38,429 INFO [Listener at localhost.localdomain/34273] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:38,431 INFO [Listener at localhost.localdomain/34273] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43393 connecting to ZooKeeper ensemble=127.0.0.1:62351 2023-07-21 11:17:38,439 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:433930x0, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:17:38,440 DEBUG [Listener at localhost.localdomain/34273] zookeeper.ZKUtil(164): regionserver:433930x0, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:17:38,442 DEBUG [Listener at localhost.localdomain/34273] zookeeper.ZKUtil(164): regionserver:433930x0, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:38,443 DEBUG [Listener at localhost.localdomain/34273] zookeeper.ZKUtil(164): regionserver:433930x0, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:17:38,453 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43393-0x1018798e2b60002 connected 2023-07-21 11:17:38,456 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43393 2023-07-21 11:17:38,460 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43393 2023-07-21 11:17:38,468 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43393 2023-07-21 11:17:38,475 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43393 2023-07-21 11:17:38,476 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43393 2023-07-21 11:17:38,479 INFO [Listener at localhost.localdomain/34273] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:17:38,479 INFO [Listener at localhost.localdomain/34273] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:17:38,479 INFO [Listener at localhost.localdomain/34273] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:17:38,480 INFO [Listener at localhost.localdomain/34273] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:17:38,481 INFO [Listener at localhost.localdomain/34273] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:17:38,481 INFO [Listener at localhost.localdomain/34273] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:17:38,481 INFO [Listener at localhost.localdomain/34273] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:17:38,482 INFO [Listener at localhost.localdomain/34273] http.HttpServer(1146): Jetty bound to port 37431 2023-07-21 11:17:38,482 INFO [Listener at localhost.localdomain/34273] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:17:38,510 INFO [Listener at localhost.localdomain/34273] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:38,510 INFO [Listener at localhost.localdomain/34273] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@c739b3a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:17:38,511 INFO [Listener at localhost.localdomain/34273] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:38,511 INFO [Listener at localhost.localdomain/34273] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7af00567{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:17:38,637 INFO [Listener at localhost.localdomain/34273] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:17:38,639 INFO [Listener at localhost.localdomain/34273] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:17:38,639 INFO [Listener at localhost.localdomain/34273] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:17:38,639 INFO [Listener at localhost.localdomain/34273] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:17:38,640 INFO [Listener at localhost.localdomain/34273] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:38,642 INFO [Listener at localhost.localdomain/34273] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4beeb415{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/java.io.tmpdir/jetty-0_0_0_0-37431-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5926175397038284249/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:38,643 INFO [Listener at localhost.localdomain/34273] server.AbstractConnector(333): Started ServerConnector@786f5134{HTTP/1.1, (http/1.1)}{0.0.0.0:37431} 2023-07-21 11:17:38,643 INFO [Listener at localhost.localdomain/34273] server.Server(415): Started @41675ms 2023-07-21 11:17:38,654 INFO [Listener at localhost.localdomain/34273] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:17:38,654 INFO [Listener at localhost.localdomain/34273] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:38,654 INFO [Listener at localhost.localdomain/34273] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
11:17:38,655 INFO [Listener at localhost.localdomain/34273] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:17:38,655 INFO [Listener at localhost.localdomain/34273] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:38,655 INFO [Listener at localhost.localdomain/34273] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:17:38,655 INFO [Listener at localhost.localdomain/34273] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:17:38,657 INFO [Listener at localhost.localdomain/34273] ipc.NettyRpcServer(120): Bind to /136.243.18.41:36661 2023-07-21 11:17:38,658 INFO [Listener at localhost.localdomain/34273] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:17:38,659 DEBUG [Listener at localhost.localdomain/34273] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:17:38,660 INFO [Listener at localhost.localdomain/34273] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:38,661 INFO [Listener at localhost.localdomain/34273] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:38,662 INFO [Listener at localhost.localdomain/34273] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36661 connecting to ZooKeeper ensemble=127.0.0.1:62351 2023-07-21 11:17:38,665 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:366610x0, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:17:38,666 DEBUG [Listener at localhost.localdomain/34273] zookeeper.ZKUtil(164): regionserver:366610x0, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:17:38,667 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36661-0x1018798e2b60003 connected 2023-07-21 11:17:38,667 DEBUG [Listener at localhost.localdomain/34273] zookeeper.ZKUtil(164): regionserver:36661-0x1018798e2b60003, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:38,668 DEBUG [Listener at localhost.localdomain/34273] zookeeper.ZKUtil(164): regionserver:36661-0x1018798e2b60003, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:17:38,668 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36661 2023-07-21 11:17:38,668 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36661 2023-07-21 11:17:38,668 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36661 2023-07-21 11:17:38,669 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36661 2023-07-21 11:17:38,669 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36661 2023-07-21 11:17:38,671 INFO [Listener at localhost.localdomain/34273] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:17:38,671 INFO [Listener at localhost.localdomain/34273] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:17:38,671 INFO [Listener at localhost.localdomain/34273] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:17:38,671 INFO [Listener at localhost.localdomain/34273] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:17:38,671 INFO [Listener at localhost.localdomain/34273] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:17:38,672 INFO [Listener at localhost.localdomain/34273] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:17:38,672 INFO [Listener at localhost.localdomain/34273] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:17:38,672 INFO [Listener at localhost.localdomain/34273] http.HttpServer(1146): Jetty bound to port 39217 2023-07-21 11:17:38,672 INFO [Listener at localhost.localdomain/34273] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:17:38,679 INFO [Listener at localhost.localdomain/34273] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:38,679 INFO [Listener at localhost.localdomain/34273] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2d868100{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:17:38,679 INFO [Listener at localhost.localdomain/34273] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:38,679 INFO [Listener at localhost.localdomain/34273] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@29d6565a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:17:38,776 INFO [Listener at localhost.localdomain/34273] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:17:38,777 INFO [Listener at localhost.localdomain/34273] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:17:38,777 INFO [Listener at localhost.localdomain/34273] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:17:38,777 INFO [Listener at localhost.localdomain/34273] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:17:38,778 INFO [Listener at localhost.localdomain/34273] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:38,779 INFO [Listener at localhost.localdomain/34273] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6a4ea3be{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/java.io.tmpdir/jetty-0_0_0_0-39217-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7191250184256756136/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:38,780 INFO [Listener at localhost.localdomain/34273] server.AbstractConnector(333): Started ServerConnector@694a562a{HTTP/1.1, (http/1.1)}{0.0.0.0:39217} 2023-07-21 11:17:38,780 INFO [Listener at localhost.localdomain/34273] server.Server(415): Started @41812ms 2023-07-21 11:17:38,783 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:17:38,790 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@2c33bc64{HTTP/1.1, (http/1.1)}{0.0.0.0:43775} 2023-07-21 11:17:38,790 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(415): Started @41822ms 2023-07-21 11:17:38,790 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode 
/hbase/backup-masters/jenkins-hbase17.apache.org,45117,1689938257866 2023-07-21 11:17:38,791 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 11:17:38,792 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,45117,1689938257866 2023-07-21 11:17:38,793 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:43393-0x1018798e2b60002, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:17:38,793 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:17:38,793 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:17:38,793 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:36661-0x1018798e2b60003, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:17:38,794 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:38,795 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 11:17:38,797 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 11:17:38,797 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,45117,1689938257866 from backup master directory 2023-07-21 11:17:38,798 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,45117,1689938257866 2023-07-21 11:17:38,798 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 11:17:38,798 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 11:17:38,798 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,45117,1689938257866 2023-07-21 11:17:38,816 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/hbase.id with ID: 971626ae-9b74-415c-a4d1-e937430f38c0 2023-07-21 11:17:38,827 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:38,830 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:38,850 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x1cc486a5 to 127.0.0.1:62351 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:17:38,856 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6ed92ef, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:38,856 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:38,857 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 11:17:38,860 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:17:38,862 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/MasterData/data/master/store-tmp 2023-07-21 11:17:38,876 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:38,876 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 11:17:38,876 INFO 
[master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:17:38,876 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:17:38,876 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 11:17:38,876 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:17:38,876 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:17:38,877 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 11:17:38,877 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/MasterData/WALs/jenkins-hbase17.apache.org,45117,1689938257866 2023-07-21 11:17:38,880 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C45117%2C1689938257866, suffix=, logDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/MasterData/WALs/jenkins-hbase17.apache.org,45117,1689938257866, archiveDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/MasterData/oldWALs, maxLogs=10 2023-07-21 11:17:38,901 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45531,DS-713925bb-3495-4800-b577-d82ab9b166e8,DISK] 2023-07-21 11:17:38,901 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36029,DS-c1cd11e1-3262-4c43-b439-2fbd01b617bc,DISK] 2023-07-21 11:17:38,901 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44343,DS-52662990-f118-4ed9-aadc-56f121229758,DISK] 2023-07-21 11:17:38,908 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/MasterData/WALs/jenkins-hbase17.apache.org,45117,1689938257866/jenkins-hbase17.apache.org%2C45117%2C1689938257866.1689938258881 2023-07-21 11:17:38,909 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44343,DS-52662990-f118-4ed9-aadc-56f121229758,DISK], DatanodeInfoWithStorage[127.0.0.1:45531,DS-713925bb-3495-4800-b577-d82ab9b166e8,DISK], DatanodeInfoWithStorage[127.0.0.1:36029,DS-c1cd11e1-3262-4c43-b439-2fbd01b617bc,DISK]] 2023-07-21 11:17:38,909 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 
1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:38,909 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:38,909 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:17:38,909 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:17:38,912 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:17:38,914 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 11:17:38,915 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 11:17:38,915 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:38,916 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:17:38,917 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:17:38,920 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:17:38,922 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, 
maxSeqId=-1 2023-07-21 11:17:38,922 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10098942080, jitterRate=-0.05946272611618042}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:38,922 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 11:17:38,922 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 11:17:38,924 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 11:17:38,924 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 11:17:38,924 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 11:17:38,925 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-21 11:17:38,925 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-21 11:17:38,925 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 11:17:38,927 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 11:17:38,928 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-21 11:17:38,929 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 11:17:38,929 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 11:17:38,930 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 11:17:38,933 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:38,934 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 11:17:38,934 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 11:17:38,935 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 11:17:38,936 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:43393-0x1018798e2b60002, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:38,936 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:36661-0x1018798e2b60003, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:38,938 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:38,938 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:38,938 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:38,939 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,45117,1689938257866, sessionid=0x1018798e2b60000, setting cluster-up flag (Was=false) 2023-07-21 11:17:38,943 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:38,945 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
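The ZKUtil lines above set watchers on znodes such as /hbase/balancer before those nodes exist. In the plain ZooKeeper client API the same effect comes from an exists() call with a watch; a minimal sketch (it needs a reachable quorum, e.g. the test's 127.0.0.1:62351, to actually do anything):

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Sketch only: arms a watch on a znode that does not exist yet.
// When the node is later created, the watcher receives a NodeCreated event,
// matching the NodeCreated events for /hbase/running seen in the log.
public class WatchMissingZNode {
    public static void main(String[] args) throws Exception {
        Watcher watcher = (WatchedEvent event) ->
                System.out.println("event: " + event.getType() + " on " + event.getPath());
        ZooKeeper zk = new ZooKeeper("127.0.0.1:62351", 90000, watcher);
        // exists() returns null for a missing node but still registers the watch
        System.out.println(zk.exists("/hbase/balancer", true));
        Thread.sleep(5000); // wait briefly for a possible NodeCreated event
        zk.close();
    }
}
```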
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 11:17:38,946 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,45117,1689938257866 2023-07-21 11:17:38,948 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:38,951 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 11:17:38,951 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,45117,1689938257866 2023-07-21 11:17:38,952 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.hbase-snapshot/.tmp 2023-07-21 11:17:38,966 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 11:17:38,966 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 11:17:38,968 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 11:17:38,980 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,45117,1689938257866] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 11:17:38,980 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-21 11:17:38,981 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 
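The ZKProcedureUtil line above clears the acquired/reached/abort children under /hbase/flush-table-proc and /hbase/online-snapshot. A tiny sketch of how those three controller paths relate; the role notes in the comments are the usual reading of this layout, not taken verbatim from the log:

```java
// Sketch: the three child znodes each ZK-coordinated procedure type uses.
public class ProcedureZNodeLayout {
    public static void main(String[] args) {
        String[] procTypes = {"/hbase/flush-table-proc", "/hbase/online-snapshot"};
        for (String base : procTypes) {
            System.out.println(base + "/acquired"); // members announce they hold the barrier
            System.out.println(base + "/reached");  // members report the barrier was reached
            System.out.println(base + "/abort");    // coordinator/members flag a failed run
        }
    }
}
```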
2023-07-21 11:17:38,986 INFO [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(951): ClusterId : 971626ae-9b74-415c-a4d1-e937430f38c0 2023-07-21 11:17:38,991 DEBUG [RS:2;jenkins-hbase17:36661] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:17:38,993 DEBUG [RS:2;jenkins-hbase17:36661] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:17:38,993 DEBUG [RS:2;jenkins-hbase17:36661] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:17:38,994 DEBUG [RS:2;jenkins-hbase17:36661] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:17:38,997 INFO [RS:1;jenkins-hbase17:43393] regionserver.HRegionServer(951): ClusterId : 971626ae-9b74-415c-a4d1-e937430f38c0 2023-07-21 11:17:38,997 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 11:17:38,997 DEBUG [RS:2;jenkins-hbase17:36661] zookeeper.ReadOnlyZKClient(139): Connect 0x0233e372 to 127.0.0.1:62351 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:17:39,005 INFO [RS:0;jenkins-hbase17:34379] regionserver.HRegionServer(951): ClusterId : 971626ae-9b74-415c-a4d1-e937430f38c0 2023-07-21 11:17:39,005 DEBUG [RS:1;jenkins-hbase17:43393] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:17:39,007 DEBUG [RS:0;jenkins-hbase17:34379] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:17:39,010 DEBUG [RS:0;jenkins-hbase17:34379] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:17:39,010 DEBUG [RS:1;jenkins-hbase17:43393] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:17:39,010 DEBUG [RS:1;jenkins-hbase17:43393] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:17:39,010 DEBUG [RS:0;jenkins-hbase17:34379] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:17:39,011 DEBUG [RS:1;jenkins-hbase17:43393] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:17:39,011 DEBUG [RS:0;jenkins-hbase17:34379] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:17:39,018 DEBUG [RS:1;jenkins-hbase17:43393] zookeeper.ReadOnlyZKClient(139): Connect 0x19489931 to 127.0.0.1:62351 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:17:39,018 DEBUG [RS:0;jenkins-hbase17:34379] zookeeper.ReadOnlyZKClient(139): Connect 0x099494b4 to 127.0.0.1:62351 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:17:39,021 DEBUG [RS:2;jenkins-hbase17:36661] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@cf40d44, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:39,021 DEBUG [RS:2;jenkins-hbase17:36661] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@8cb8802, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, 
connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:17:39,028 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 11:17:39,032 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 11:17:39,035 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 11:17:39,035 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 11:17:39,035 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:17:39,035 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:17:39,036 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:17:39,036 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:17:39,036 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-07-21 11:17:39,036 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,036 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:17:39,036 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,041 DEBUG [RS:2;jenkins-hbase17:36661] regionserver.ShutdownHook(81): Installed shutdown hook thread: 
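The StochasticLoadBalancer lines above list the configured cost functions and note that the sum of their multipliers is still 0.0 at this point. As a hedged illustration only (not the balancer's actual code), the overall cost can be thought of as a multiplier-weighted blend, so a zero multiplier sum means no function currently contributes:

```java
// Sketch: weighted blending of per-function costs, illustrating the
// "sum of multiplier of cost functions" figure printed by the balancer.
public class WeightedCostSketch {
    static double blendedCost(double[] multipliers, double[] costs) {
        double weighted = 0, sum = 0;
        for (int i = 0; i < multipliers.length; i++) {
            weighted += multipliers[i] * costs[i];
            sum += multipliers[i];
        }
        return sum == 0 ? 0 : weighted / sum; // nothing contributes when the sum is 0.0
    }

    public static void main(String[] args) {
        System.out.println(blendedCost(new double[] {0, 0, 0}, new double[] {0.4, 0.1, 0.9}));
        System.out.println(blendedCost(new double[] {500, 5, 5}, new double[] {0.4, 0.1, 0.9}));
    }
}
```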
Shutdownhook:RS:2;jenkins-hbase17:36661 2023-07-21 11:17:39,041 INFO [RS:2;jenkins-hbase17:36661] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:17:39,041 INFO [RS:2;jenkins-hbase17:36661] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:17:39,041 DEBUG [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 11:17:39,042 INFO [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,45117,1689938257866 with isa=jenkins-hbase17.apache.org/136.243.18.41:36661, startcode=1689938258653 2023-07-21 11:17:39,042 DEBUG [RS:2;jenkins-hbase17:36661] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:17:39,048 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689938289048 2023-07-21 11:17:39,053 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 11:17:39,054 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 11:17:39,054 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 11:17:39,054 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 11:17:39,054 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 11:17:39,054 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 11:17:39,055 DEBUG [RS:1;jenkins-hbase17:43393] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@13b0f241, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:39,055 DEBUG [RS:1;jenkins-hbase17:43393] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@56b26e53, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:17:39,055 DEBUG [RS:0;jenkins-hbase17:34379] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@36cffc4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:39,055 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:33919, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:17:39,055 DEBUG [RS:0;jenkins-hbase17:34379] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@466e61cd, compressor=null, 
tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:17:39,060 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,061 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45117] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:39,065 DEBUG [PEWorker-2] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 11:17:39,067 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 11:17:39,067 INFO [PEWorker-2] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 11:17:39,067 DEBUG [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 11:17:39,067 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 11:17:39,067 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 11:17:39,067 WARN [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
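The lines above show a region server receiving ServerNotRunningYetException from reportForDuty and then "sleeping 100 ms and then retrying". A minimal standalone sketch of that report-then-back-off loop; the reportForDuty() stub here is hypothetical and only simulates a master that is not ready for the first few attempts:

```java
// Sketch: retry-until-registered loop mirroring the logged behaviour
// ("reportForDuty failed; sleeping 100 ms and then retrying.").
public class ReportForDutyRetrySketch {
    private static int attempts = 0;

    // Hypothetical stand-in: succeeds only once the "master" has started.
    static boolean reportForDuty() {
        return ++attempts >= 3;
    }

    public static void main(String[] args) throws InterruptedException {
        while (!reportForDuty()) {
            System.out.println("reportForDuty failed; sleeping 100 ms and then retrying.");
            Thread.sleep(100);
        }
        System.out.println("registered with master after " + attempts + " attempts");
    }
}
```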
2023-07-21 11:17:39,068 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 11:17:39,068 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 11:17:39,068 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938259068,5,FailOnTimeoutGroup] 2023-07-21 11:17:39,069 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938259068,5,FailOnTimeoutGroup] 2023-07-21 11:17:39,069 INFO [PEWorker-2] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:39,069 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,069 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 11:17:39,069 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,069 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,070 DEBUG [RS:0;jenkins-hbase17:34379] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:34379 2023-07-21 11:17:39,070 INFO [RS:0;jenkins-hbase17:34379] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:17:39,070 INFO [RS:0;jenkins-hbase17:34379] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:17:39,070 DEBUG [RS:0;jenkins-hbase17:34379] regionserver.HRegionServer(1022): About to register with Master. 
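Several ScheduledChore entries above and below (LogsCleaner period=600000 ms, HFileCleaner, ReplicationBarrierCleaner, SnapshotCleaner, ...) are periodic background tasks. A plain-JDK sketch of the same fixed-period scheduling pattern; this is an analogy, not HBase's ChoreService, and the period is shortened so it runs quickly:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: a fixed-period background task, analogous to the ScheduledChore
// entries in the log (the real periods there range from seconds to hours).
public class ChoreSketch {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        pool.scheduleAtFixedRate(
                () -> System.out.println("chore tick: pretend to clean old logs"),
                0, 200, TimeUnit.MILLISECONDS);
        Thread.sleep(1000); // let a few ticks run
        pool.shutdownNow();
    }
}
```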
2023-07-21 11:17:39,072 DEBUG [RS:1;jenkins-hbase17:43393] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase17:43393 2023-07-21 11:17:39,072 INFO [RS:1;jenkins-hbase17:43393] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:17:39,072 INFO [RS:1;jenkins-hbase17:43393] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:17:39,072 INFO [RS:0;jenkins-hbase17:34379] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,45117,1689938257866 with isa=jenkins-hbase17.apache.org/136.243.18.41:34379, startcode=1689938258169 2023-07-21 11:17:39,072 DEBUG [RS:1;jenkins-hbase17:43393] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 11:17:39,072 DEBUG [RS:0;jenkins-hbase17:34379] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:17:39,072 INFO [RS:1;jenkins-hbase17:43393] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,45117,1689938257866 with isa=jenkins-hbase17.apache.org/136.243.18.41:43393, startcode=1689938258412 2023-07-21 11:17:39,072 DEBUG [RS:1;jenkins-hbase17:43393] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:17:39,083 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:57329, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:17:39,083 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:44767, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:17:39,084 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45117] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,43393,1689938258412 2023-07-21 11:17:39,084 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,45117,1689938257866] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 11:17:39,085 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45117] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,34379,1689938258169 2023-07-21 11:17:39,085 DEBUG [RS:1;jenkins-hbase17:43393] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871 2023-07-21 11:17:39,085 DEBUG [RS:1;jenkins-hbase17:43393] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:42461 2023-07-21 11:17:39,085 DEBUG [RS:0;jenkins-hbase17:34379] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871 2023-07-21 11:17:39,085 DEBUG [RS:1;jenkins-hbase17:43393] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39307 2023-07-21 11:17:39,085 DEBUG [RS:0;jenkins-hbase17:34379] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:42461 2023-07-21 11:17:39,085 DEBUG [RS:0;jenkins-hbase17:34379] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39307 2023-07-21 11:17:39,089 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:39,091 DEBUG [RS:1;jenkins-hbase17:43393] zookeeper.ZKUtil(162): regionserver:43393-0x1018798e2b60002, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43393,1689938258412 2023-07-21 11:17:39,091 WARN [RS:1;jenkins-hbase17:43393] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 11:17:39,091 INFO [RS:1;jenkins-hbase17:43393] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:17:39,091 DEBUG [RS:1;jenkins-hbase17:43393] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/WALs/jenkins-hbase17.apache.org,43393,1689938258412 2023-07-21 11:17:39,096 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,45117,1689938257866] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 11:17:39,096 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,45117,1689938257866] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 11:17:39,096 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,45117,1689938257866] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 11:17:39,097 DEBUG [RS:0;jenkins-hbase17:34379] zookeeper.ZKUtil(162): regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34379,1689938258169 2023-07-21 11:17:39,097 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,43393,1689938258412] 2023-07-21 11:17:39,097 WARN [RS:0;jenkins-hbase17:34379] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 11:17:39,097 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,34379,1689938258169] 2023-07-21 11:17:39,097 INFO [RS:0;jenkins-hbase17:34379] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:17:39,097 DEBUG [RS:0;jenkins-hbase17:34379] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/WALs/jenkins-hbase17.apache.org,34379,1689938258169 2023-07-21 11:17:39,105 DEBUG [RS:1;jenkins-hbase17:43393] zookeeper.ZKUtil(162): regionserver:43393-0x1018798e2b60002, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34379,1689938258169 2023-07-21 11:17:39,105 DEBUG [RS:1;jenkins-hbase17:43393] zookeeper.ZKUtil(162): regionserver:43393-0x1018798e2b60002, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43393,1689938258412 2023-07-21 11:17:39,106 DEBUG [RS:1;jenkins-hbase17:43393] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:17:39,106 INFO [RS:1;jenkins-hbase17:43393] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:17:39,122 DEBUG [RS:0;jenkins-hbase17:34379] zookeeper.ZKUtil(162): regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34379,1689938258169 2023-07-21 11:17:39,122 INFO [RS:1;jenkins-hbase17:43393] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:17:39,122 DEBUG [RS:0;jenkins-hbase17:34379] zookeeper.ZKUtil(162): regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43393,1689938258412 2023-07-21 11:17:39,122 INFO [RS:1;jenkins-hbase17:43393] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:17:39,122 INFO [RS:1;jenkins-hbase17:43393] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
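The MemStoreFlusher line above prints globalMemStoreLimit=782.4 M and globalMemStoreLimitLowMark=743.3 M. Assuming the stock lower-limit fraction of 0.95 (hbase.regionserver.global.memstore.size.lower.limit), the low mark is simply 95% of the global limit; a quick arithmetic check:

```java
// Sketch: low-water mark as a fraction of the global memstore limit.
// Assumption: the default lower-limit fraction of 0.95.
public class MemStoreLowMarkSketch {
    public static void main(String[] args) {
        double globalLimitMb = 782.4;     // from the log line above
        double lowerLimitFraction = 0.95; // assumed default
        System.out.printf("low mark = %.1f M%n", globalLimitMb * lowerLimitFraction);
        // prints 743.3 M, matching globalMemStoreLimitLowMark in the log
    }
}
```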
2023-07-21 11:17:39,122 INFO [RS:1;jenkins-hbase17:43393] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:17:39,124 DEBUG [RS:0;jenkins-hbase17:34379] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:17:39,124 INFO [RS:0;jenkins-hbase17:34379] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:17:39,125 INFO [RS:1;jenkins-hbase17:43393] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,125 DEBUG [RS:1;jenkins-hbase17:43393] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,125 DEBUG [RS:1;jenkins-hbase17:43393] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,125 DEBUG [RS:1;jenkins-hbase17:43393] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,125 DEBUG [RS:1;jenkins-hbase17:43393] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,125 DEBUG [RS:1;jenkins-hbase17:43393] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,125 DEBUG [RS:1;jenkins-hbase17:43393] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:17:39,126 DEBUG [RS:1;jenkins-hbase17:43393] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,126 DEBUG [RS:1;jenkins-hbase17:43393] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,126 DEBUG [RS:1;jenkins-hbase17:43393] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,126 DEBUG [RS:1;jenkins-hbase17:43393] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,126 INFO [RS:0;jenkins-hbase17:34379] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:17:39,127 INFO [RS:1;jenkins-hbase17:43393] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,127 INFO [RS:0;jenkins-hbase17:34379] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:17:39,127 INFO [RS:1;jenkins-hbase17:43393] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
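The ExecutorService lines above start bounded pools such as RS_OPEN_REGION with corePoolSize=1/maxPoolSize=1 and RS_LOG_REPLAY_OPS with 2/2. A plain java.util.concurrent sketch of one such bounded pool; this is only an analogy, not HBase's executor.ExecutorService wrapper:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch: a bounded pool with matching core/max sizes, analogous to the
// RS_* executor services listed in the log.
public class BoundedPoolSketch {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor openRegionPool = new ThreadPoolExecutor(
                1, 1,               // corePoolSize=1, maxPoolSize=1 (as logged)
                60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>());
        openRegionPool.submit(() -> System.out.println("open region task running"));
        openRegionPool.shutdown();
        openRegionPool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```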
2023-07-21 11:17:39,127 INFO [RS:1;jenkins-hbase17:43393] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,127 INFO [RS:1;jenkins-hbase17:43393] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,127 INFO [RS:0;jenkins-hbase17:34379] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,128 INFO [RS:0;jenkins-hbase17:34379] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:17:39,129 INFO [RS:0;jenkins-hbase17:34379] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,129 DEBUG [RS:0;jenkins-hbase17:34379] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,129 DEBUG [RS:0;jenkins-hbase17:34379] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,129 DEBUG [RS:0;jenkins-hbase17:34379] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,130 DEBUG [RS:0;jenkins-hbase17:34379] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,130 DEBUG [RS:0;jenkins-hbase17:34379] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,130 DEBUG [RS:0;jenkins-hbase17:34379] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:17:39,130 DEBUG [RS:0;jenkins-hbase17:34379] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,132 DEBUG [RS:0;jenkins-hbase17:34379] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,134 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:39,134 DEBUG [RS:0;jenkins-hbase17:34379] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,134 DEBUG [RS:0;jenkins-hbase17:34379] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,135 INFO [PEWorker-2] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:39,136 INFO [PEWorker-2] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', 
{TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871 2023-07-21 11:17:39,147 INFO [RS:1;jenkins-hbase17:43393] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:17:39,147 INFO [RS:1;jenkins-hbase17:43393] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43393,1689938258412-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,148 INFO [RS:0;jenkins-hbase17:34379] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,149 INFO [RS:0;jenkins-hbase17:34379] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,149 INFO [RS:0;jenkins-hbase17:34379] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,149 INFO [RS:0;jenkins-hbase17:34379] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,163 INFO [RS:0;jenkins-hbase17:34379] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:17:39,164 INFO [RS:0;jenkins-hbase17:34379] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,34379,1689938258169-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,168 INFO [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,45117,1689938257866 with isa=jenkins-hbase17.apache.org/136.243.18.41:36661, startcode=1689938258653 2023-07-21 11:17:39,169 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45117] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,36661,1689938258653 2023-07-21 11:17:39,169 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,45117,1689938257866] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
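The hbase:meta descriptor printed above lists three in-memory families (info, rep_barrier, table) with BLOOMFILTER 'NONE' and block sizes of 8192/65536 bytes. A rough client-side sketch of building a descriptor with those attributes, assuming the HBase 2.x TableDescriptorBuilder/ColumnFamilyDescriptorBuilder API; it only constructs a descriptor in memory with a hypothetical table name and is not the FSTableDescriptors bootstrap path the log shows:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch (assumed HBase 2.x client API): a descriptor with the attributes the
// log prints for hbase:meta's 'info' family; the other families are analogous.
public class MetaLikeDescriptorSketch {
    public static void main(String[] args) {
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("info"))
                .setInMemory(true)                  // IN_MEMORY => 'true'
                .setMaxVersions(3)                  // VERSIONS => '3'
                .setBlocksize(8192)                 // BLOCKSIZE => '8192'
                .setBloomFilterType(BloomType.NONE) // BLOOMFILTER => 'NONE'
                .build();
        TableDescriptor td = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("meta_like_example")) // hypothetical name
                .setColumnFamily(info)
                .build();
        System.out.println(td);
    }
}
```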
2023-07-21 11:17:39,169 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,45117,1689938257866] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 11:17:39,169 DEBUG [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871 2023-07-21 11:17:39,169 DEBUG [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:42461 2023-07-21 11:17:39,169 DEBUG [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39307 2023-07-21 11:17:39,170 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:43393-0x1018798e2b60002, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:39,170 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:39,170 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:39,171 DEBUG [RS:2;jenkins-hbase17:36661] zookeeper.ZKUtil(162): regionserver:36661-0x1018798e2b60003, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36661,1689938258653 2023-07-21 11:17:39,171 WARN [RS:2;jenkins-hbase17:36661] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 11:17:39,171 INFO [RS:2;jenkins-hbase17:36661] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:17:39,171 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34379,1689938258169 2023-07-21 11:17:39,171 DEBUG [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/WALs/jenkins-hbase17.apache.org,36661,1689938258653 2023-07-21 11:17:39,171 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43393,1689938258412 2023-07-21 11:17:39,171 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43393-0x1018798e2b60002, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34379,1689938258169 2023-07-21 11:17:39,172 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36661,1689938258653 2023-07-21 11:17:39,172 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43393-0x1018798e2b60002, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43393,1689938258412 2023-07-21 11:17:39,172 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43393-0x1018798e2b60002, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36661,1689938258653 2023-07-21 11:17:39,172 INFO [RS:1;jenkins-hbase17:43393] regionserver.Replication(203): jenkins-hbase17.apache.org,43393,1689938258412 started 2023-07-21 11:17:39,172 INFO [RS:1;jenkins-hbase17:43393] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,43393,1689938258412, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:43393, sessionid=0x1018798e2b60002 2023-07-21 11:17:39,173 DEBUG [RS:1;jenkins-hbase17:43393] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:17:39,173 DEBUG [RS:1;jenkins-hbase17:43393] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,43393,1689938258412 2023-07-21 11:17:39,174 DEBUG [RS:1;jenkins-hbase17:43393] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,43393,1689938258412' 2023-07-21 11:17:39,174 DEBUG [RS:1;jenkins-hbase17:43393] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:17:39,174 DEBUG [RS:1;jenkins-hbase17:43393] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:17:39,174 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,36661,1689938258653] 2023-07-21 11:17:39,176 DEBUG [RS:1;jenkins-hbase17:43393] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:17:39,176 DEBUG [RS:1;jenkins-hbase17:43393] 
procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:17:39,176 DEBUG [RS:1;jenkins-hbase17:43393] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,43393,1689938258412 2023-07-21 11:17:39,176 DEBUG [RS:1;jenkins-hbase17:43393] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,43393,1689938258412' 2023-07-21 11:17:39,176 DEBUG [RS:1;jenkins-hbase17:43393] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:17:39,177 DEBUG [RS:1;jenkins-hbase17:43393] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:17:39,178 DEBUG [RS:1;jenkins-hbase17:43393] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:17:39,178 INFO [RS:1;jenkins-hbase17:43393] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 11:17:39,179 DEBUG [RS:2;jenkins-hbase17:36661] zookeeper.ZKUtil(162): regionserver:36661-0x1018798e2b60003, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34379,1689938258169 2023-07-21 11:17:39,179 DEBUG [RS:2;jenkins-hbase17:36661] zookeeper.ZKUtil(162): regionserver:36661-0x1018798e2b60003, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43393,1689938258412 2023-07-21 11:17:39,179 DEBUG [RS:2;jenkins-hbase17:36661] zookeeper.ZKUtil(162): regionserver:36661-0x1018798e2b60003, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36661,1689938258653 2023-07-21 11:17:39,180 DEBUG [RS:2;jenkins-hbase17:36661] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:17:39,180 INFO [RS:2;jenkins-hbase17:36661] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:17:39,180 INFO [RS:1;jenkins-hbase17:43393] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,184 DEBUG [RS:1;jenkins-hbase17:43393] zookeeper.ZKUtil(398): regionserver:43393-0x1018798e2b60002, quorum=127.0.0.1:62351, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 11:17:39,184 INFO [RS:1;jenkins-hbase17:43393] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 11:17:39,184 INFO [RS:1;jenkins-hbase17:43393] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,185 INFO [RS:1;jenkins-hbase17:43393] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 11:17:39,188 INFO [RS:2;jenkins-hbase17:36661] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:17:39,188 INFO [RS:0;jenkins-hbase17:34379] regionserver.Replication(203): jenkins-hbase17.apache.org,34379,1689938258169 started 2023-07-21 11:17:39,188 INFO [RS:0;jenkins-hbase17:34379] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,34379,1689938258169, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:34379, sessionid=0x1018798e2b60001 2023-07-21 11:17:39,189 DEBUG [RS:0;jenkins-hbase17:34379] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:17:39,189 DEBUG [RS:0;jenkins-hbase17:34379] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,34379,1689938258169 2023-07-21 11:17:39,189 DEBUG [PEWorker-2] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:39,189 DEBUG [RS:0;jenkins-hbase17:34379] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,34379,1689938258169' 2023-07-21 11:17:39,189 DEBUG [RS:0;jenkins-hbase17:34379] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:17:39,189 INFO [RS:2;jenkins-hbase17:36661] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:17:39,189 INFO [RS:2;jenkins-hbase17:36661] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,189 DEBUG [RS:0;jenkins-hbase17:34379] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:17:39,190 DEBUG [RS:0;jenkins-hbase17:34379] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:17:39,190 DEBUG [RS:0;jenkins-hbase17:34379] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:17:39,190 DEBUG [RS:0;jenkins-hbase17:34379] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,34379,1689938258169 2023-07-21 11:17:39,190 DEBUG [RS:0;jenkins-hbase17:34379] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,34379,1689938258169' 2023-07-21 11:17:39,190 DEBUG [RS:0;jenkins-hbase17:34379] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:17:39,190 DEBUG [RS:0;jenkins-hbase17:34379] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:17:39,190 DEBUG [RS:0;jenkins-hbase17:34379] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:17:39,190 INFO [RS:0;jenkins-hbase17:34379] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 11:17:39,190 INFO [RS:0;jenkins-hbase17:34379] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-21 11:17:39,191 DEBUG [RS:0;jenkins-hbase17:34379] zookeeper.ZKUtil(398): regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 11:17:39,191 INFO [RS:0;jenkins-hbase17:34379] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 11:17:39,191 INFO [RS:0;jenkins-hbase17:34379] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,191 INFO [RS:0;jenkins-hbase17:34379] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,194 INFO [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:17:39,195 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 11:17:39,195 INFO [RS:2;jenkins-hbase17:36661] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,196 DEBUG [RS:2;jenkins-hbase17:36661] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,197 DEBUG [RS:2;jenkins-hbase17:36661] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,197 DEBUG [RS:2;jenkins-hbase17:36661] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,197 DEBUG [RS:2;jenkins-hbase17:36661] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,197 DEBUG [RS:2;jenkins-hbase17:36661] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,197 DEBUG [RS:2;jenkins-hbase17:36661] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:17:39,197 DEBUG [RS:2;jenkins-hbase17:36661] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,197 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/info 2023-07-21 11:17:39,197 DEBUG [RS:2;jenkins-hbase17:36661] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,197 DEBUG [RS:2;jenkins-hbase17:36661] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,197 DEBUG [RS:2;jenkins-hbase17:36661] executor.ExecutorService(93): 
Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:39,198 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 11:17:39,198 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:39,198 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 11:17:39,200 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:17:39,201 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 11:17:39,202 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:39,202 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 11:17:39,203 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/table 2023-07-21 11:17:39,204 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; 
major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 11:17:39,204 INFO [RS:2;jenkins-hbase17:36661] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,204 INFO [RS:2;jenkins-hbase17:36661] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,204 INFO [RS:2;jenkins-hbase17:36661] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,204 INFO [RS:2;jenkins-hbase17:36661] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,205 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:39,209 DEBUG [PEWorker-2] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740 2023-07-21 11:17:39,210 DEBUG [PEWorker-2] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740 2023-07-21 11:17:39,212 DEBUG [PEWorker-2] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 11:17:39,220 DEBUG [PEWorker-2] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 11:17:39,220 INFO [RS:2;jenkins-hbase17:36661] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:17:39,221 INFO [RS:2;jenkins-hbase17:36661] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,36661,1689938258653-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 11:17:39,222 DEBUG [PEWorker-2] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:39,223 INFO [PEWorker-2] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11843625120, jitterRate=0.10302354395389557}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 11:17:39,223 DEBUG [PEWorker-2] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 11:17:39,223 DEBUG [PEWorker-2] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 11:17:39,223 INFO [PEWorker-2] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 11:17:39,223 DEBUG [PEWorker-2] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 11:17:39,223 DEBUG [PEWorker-2] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 11:17:39,223 DEBUG [PEWorker-2] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 11:17:39,224 INFO [PEWorker-2] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 11:17:39,224 DEBUG [PEWorker-2] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 11:17:39,225 DEBUG [PEWorker-2] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 11:17:39,225 INFO [PEWorker-2] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 11:17:39,225 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 11:17:39,226 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 11:17:39,228 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 11:17:39,239 INFO [RS:2;jenkins-hbase17:36661] regionserver.Replication(203): jenkins-hbase17.apache.org,36661,1689938258653 started 2023-07-21 11:17:39,239 INFO [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,36661,1689938258653, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:36661, sessionid=0x1018798e2b60003 2023-07-21 11:17:39,239 DEBUG [RS:2;jenkins-hbase17:36661] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:17:39,239 DEBUG [RS:2;jenkins-hbase17:36661] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,36661,1689938258653 2023-07-21 11:17:39,239 DEBUG [RS:2;jenkins-hbase17:36661] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,36661,1689938258653' 
2023-07-21 11:17:39,239 DEBUG [RS:2;jenkins-hbase17:36661] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:17:39,240 DEBUG [RS:2;jenkins-hbase17:36661] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:17:39,240 DEBUG [RS:2;jenkins-hbase17:36661] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:17:39,240 DEBUG [RS:2;jenkins-hbase17:36661] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:17:39,240 DEBUG [RS:2;jenkins-hbase17:36661] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,36661,1689938258653 2023-07-21 11:17:39,240 DEBUG [RS:2;jenkins-hbase17:36661] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,36661,1689938258653' 2023-07-21 11:17:39,240 DEBUG [RS:2;jenkins-hbase17:36661] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:17:39,241 DEBUG [RS:2;jenkins-hbase17:36661] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:17:39,241 DEBUG [RS:2;jenkins-hbase17:36661] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:17:39,241 INFO [RS:2;jenkins-hbase17:36661] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 11:17:39,241 INFO [RS:2;jenkins-hbase17:36661] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,242 DEBUG [RS:2;jenkins-hbase17:36661] zookeeper.ZKUtil(398): regionserver:36661-0x1018798e2b60003, quorum=127.0.0.1:62351, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 11:17:39,242 INFO [RS:2;jenkins-hbase17:36661] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 11:17:39,242 INFO [RS:2;jenkins-hbase17:36661] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,242 INFO [RS:2;jenkins-hbase17:36661] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 11:17:39,288 INFO [RS:1;jenkins-hbase17:43393] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C43393%2C1689938258412, suffix=, logDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/WALs/jenkins-hbase17.apache.org,43393,1689938258412, archiveDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/oldWALs, maxLogs=32 2023-07-21 11:17:39,293 INFO [RS:0;jenkins-hbase17:34379] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C34379%2C1689938258169, suffix=, logDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/WALs/jenkins-hbase17.apache.org,34379,1689938258169, archiveDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/oldWALs, maxLogs=32 2023-07-21 11:17:39,310 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44343,DS-52662990-f118-4ed9-aadc-56f121229758,DISK] 2023-07-21 11:17:39,310 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36029,DS-c1cd11e1-3262-4c43-b439-2fbd01b617bc,DISK] 2023-07-21 11:17:39,311 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45531,DS-713925bb-3495-4800-b577-d82ab9b166e8,DISK] 2023-07-21 11:17:39,320 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44343,DS-52662990-f118-4ed9-aadc-56f121229758,DISK] 2023-07-21 11:17:39,320 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36029,DS-c1cd11e1-3262-4c43-b439-2fbd01b617bc,DISK] 2023-07-21 11:17:39,320 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45531,DS-713925bb-3495-4800-b577-d82ab9b166e8,DISK] 2023-07-21 11:17:39,321 INFO [RS:1;jenkins-hbase17:43393] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/WALs/jenkins-hbase17.apache.org,43393,1689938258412/jenkins-hbase17.apache.org%2C43393%2C1689938258412.1689938259289 2023-07-21 11:17:39,333 DEBUG [RS:1;jenkins-hbase17:43393] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44343,DS-52662990-f118-4ed9-aadc-56f121229758,DISK], DatanodeInfoWithStorage[127.0.0.1:45531,DS-713925bb-3495-4800-b577-d82ab9b166e8,DISK], DatanodeInfoWithStorage[127.0.0.1:36029,DS-c1cd11e1-3262-4c43-b439-2fbd01b617bc,DISK]] 2023-07-21 11:17:39,334 INFO [RS:0;jenkins-hbase17:34379] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/WALs/jenkins-hbase17.apache.org,34379,1689938258169/jenkins-hbase17.apache.org%2C34379%2C1689938258169.1689938259293 2023-07-21 11:17:39,335 DEBUG [RS:0;jenkins-hbase17:34379] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44343,DS-52662990-f118-4ed9-aadc-56f121229758,DISK], DatanodeInfoWithStorage[127.0.0.1:45531,DS-713925bb-3495-4800-b577-d82ab9b166e8,DISK], DatanodeInfoWithStorage[127.0.0.1:36029,DS-c1cd11e1-3262-4c43-b439-2fbd01b617bc,DISK]] 2023-07-21 11:17:39,344 INFO [RS:2;jenkins-hbase17:36661] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C36661%2C1689938258653, suffix=, logDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/WALs/jenkins-hbase17.apache.org,36661,1689938258653, archiveDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/oldWALs, maxLogs=32 2023-07-21 11:17:39,376 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45531,DS-713925bb-3495-4800-b577-d82ab9b166e8,DISK] 2023-07-21 11:17:39,376 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36029,DS-c1cd11e1-3262-4c43-b439-2fbd01b617bc,DISK] 2023-07-21 11:17:39,378 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44343,DS-52662990-f118-4ed9-aadc-56f121229758,DISK] 2023-07-21 11:17:39,378 DEBUG [jenkins-hbase17:45117] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 11:17:39,379 DEBUG [jenkins-hbase17:45117] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:39,379 DEBUG [jenkins-hbase17:45117] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:39,379 DEBUG [jenkins-hbase17:45117] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:39,379 DEBUG [jenkins-hbase17:45117] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:39,379 DEBUG [jenkins-hbase17:45117] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:39,384 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,34379,1689938258169, state=OPENING 2023-07-21 11:17:39,384 INFO [RS:2;jenkins-hbase17:36661] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/WALs/jenkins-hbase17.apache.org,36661,1689938258653/jenkins-hbase17.apache.org%2C36661%2C1689938258653.1689938259345 2023-07-21 11:17:39,385 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 11:17:39,386 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:39,386 DEBUG [RS:2;jenkins-hbase17:36661] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36029,DS-c1cd11e1-3262-4c43-b439-2fbd01b617bc,DISK], DatanodeInfoWithStorage[127.0.0.1:45531,DS-713925bb-3495-4800-b577-d82ab9b166e8,DISK], DatanodeInfoWithStorage[127.0.0.1:44343,DS-52662990-f118-4ed9-aadc-56f121229758,DISK]] 2023-07-21 11:17:39,386 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,34379,1689938258169}] 2023-07-21 11:17:39,386 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:17:39,541 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,34379,1689938258169 2023-07-21 11:17:39,541 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:17:39,543 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:45596, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:17:39,548 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 11:17:39,548 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:17:39,550 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C34379%2C1689938258169.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/WALs/jenkins-hbase17.apache.org,34379,1689938258169, archiveDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/oldWALs, maxLogs=32 2023-07-21 11:17:39,567 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45531,DS-713925bb-3495-4800-b577-d82ab9b166e8,DISK] 2023-07-21 11:17:39,567 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44343,DS-52662990-f118-4ed9-aadc-56f121229758,DISK] 2023-07-21 11:17:39,575 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36029,DS-c1cd11e1-3262-4c43-b439-2fbd01b617bc,DISK] 2023-07-21 11:17:39,582 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/WALs/jenkins-hbase17.apache.org,34379,1689938258169/jenkins-hbase17.apache.org%2C34379%2C1689938258169.meta.1689938259550.meta 2023-07-21 11:17:39,582 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:45531,DS-713925bb-3495-4800-b577-d82ab9b166e8,DISK], DatanodeInfoWithStorage[127.0.0.1:44343,DS-52662990-f118-4ed9-aadc-56f121229758,DISK], DatanodeInfoWithStorage[127.0.0.1:36029,DS-c1cd11e1-3262-4c43-b439-2fbd01b617bc,DISK]] 2023-07-21 11:17:39,582 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:39,583 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:17:39,583 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 11:17:39,583 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-21 11:17:39,583 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 11:17:39,583 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:39,583 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 11:17:39,583 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 11:17:39,587 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 11:17:39,588 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/info 2023-07-21 11:17:39,588 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/info 2023-07-21 11:17:39,588 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 11:17:39,589 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, 
storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:39,589 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 11:17:39,590 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:17:39,590 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:17:39,590 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 11:17:39,591 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:39,591 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 11:17:39,592 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/table 2023-07-21 11:17:39,592 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/table 2023-07-21 11:17:39,592 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 11:17:39,593 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:39,594 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740 2023-07-21 11:17:39,595 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740 2023-07-21 11:17:39,597 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 11:17:39,599 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 11:17:39,600 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9409617280, jitterRate=-0.1236611008644104}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 11:17:39,600 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 11:17:39,600 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689938259540 2023-07-21 11:17:39,606 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 11:17:39,606 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 11:17:39,607 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,34379,1689938258169, state=OPEN 2023-07-21 11:17:39,608 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 11:17:39,608 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:17:39,609 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 11:17:39,609 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,34379,1689938258169 in 222 msec 2023-07-21 11:17:39,610 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 11:17:39,610 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 384 msec 2023-07-21 11:17:39,612 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 630 msec 2023-07-21 11:17:39,612 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689938259612, completionTime=-1 2023-07-21 11:17:39,612 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 11:17:39,612 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-21 11:17:39,615 DEBUG [hconnection-0x3029b5b6-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:17:39,618 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:45598, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:17:39,620 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 11:17:39,620 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689938319620 2023-07-21 11:17:39,620 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689938379620 2023-07-21 11:17:39,620 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 8 msec 2023-07-21 11:17:39,625 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,45117,1689938257866-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,625 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,45117,1689938257866-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,625 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,45117,1689938257866-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,625 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:45117, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,625 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,625 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-21 11:17:39,625 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:39,626 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 11:17:39,627 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 11:17:39,628 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:17:39,629 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:17:39,630 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/hbase/namespace/4ae3a8653d7f5d415a8685b0dcb6cad3 2023-07-21 11:17:39,630 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/hbase/namespace/4ae3a8653d7f5d415a8685b0dcb6cad3 empty. 2023-07-21 11:17:39,631 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/hbase/namespace/4ae3a8653d7f5d415a8685b0dcb6cad3 2023-07-21 11:17:39,631 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 11:17:39,636 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,45117,1689938257866] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:39,638 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,45117,1689938257866] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 11:17:39,640 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:17:39,641 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:17:39,643 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/hbase/rsgroup/fd2ae591ad172e4900fc6f975fbd95e9 2023-07-21 11:17:39,644 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/hbase/rsgroup/fd2ae591ad172e4900fc6f975fbd95e9 empty. 2023-07-21 11:17:39,644 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/hbase/rsgroup/fd2ae591ad172e4900fc6f975fbd95e9 2023-07-21 11:17:39,644 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 11:17:39,656 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:39,661 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4ae3a8653d7f5d415a8685b0dcb6cad3, NAME => 'hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp 2023-07-21 11:17:39,662 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:39,663 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => fd2ae591ad172e4900fc6f975fbd95e9, NAME => 'hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp 2023-07-21 11:17:39,675 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:39,676 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 4ae3a8653d7f5d415a8685b0dcb6cad3, disabling compactions & flushes 2023-07-21 11:17:39,676 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3. 
2023-07-21 11:17:39,676 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3. 2023-07-21 11:17:39,676 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3. after waiting 0 ms 2023-07-21 11:17:39,676 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3. 2023-07-21 11:17:39,676 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3. 2023-07-21 11:17:39,676 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 4ae3a8653d7f5d415a8685b0dcb6cad3: 2023-07-21 11:17:39,679 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:17:39,680 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:39,680 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing fd2ae591ad172e4900fc6f975fbd95e9, disabling compactions & flushes 2023-07-21 11:17:39,680 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938259680"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938259680"}]},"ts":"1689938259680"} 2023-07-21 11:17:39,680 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9. 2023-07-21 11:17:39,680 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9. 2023-07-21 11:17:39,680 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9. after waiting 0 ms 2023-07-21 11:17:39,680 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9. 2023-07-21 11:17:39,680 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9. 2023-07-21 11:17:39,680 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for fd2ae591ad172e4900fc6f975fbd95e9: 2023-07-21 11:17:39,682 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 11:17:39,683 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:17:39,683 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:17:39,683 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938259683"}]},"ts":"1689938259683"} 2023-07-21 11:17:39,684 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938259684"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938259684"}]},"ts":"1689938259684"} 2023-07-21 11:17:39,684 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 11:17:39,685 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 11:17:39,685 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:17:39,686 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938259686"}]},"ts":"1689938259686"} 2023-07-21 11:17:39,687 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:39,687 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:39,687 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:39,687 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:39,687 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 11:17:39,687 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:39,687 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4ae3a8653d7f5d415a8685b0dcb6cad3, ASSIGN}] 2023-07-21 11:17:39,688 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4ae3a8653d7f5d415a8685b0dcb6cad3, ASSIGN 2023-07-21 11:17:39,689 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=4ae3a8653d7f5d415a8685b0dcb6cad3, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36661,1689938258653; forceNewPlan=false, retain=false 2023-07-21 11:17:39,689 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are 
{/default-rack=0} 2023-07-21 11:17:39,689 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:39,689 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:39,689 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:39,689 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:39,689 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=fd2ae591ad172e4900fc6f975fbd95e9, ASSIGN}] 2023-07-21 11:17:39,691 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=fd2ae591ad172e4900fc6f975fbd95e9, ASSIGN 2023-07-21 11:17:39,692 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=fd2ae591ad172e4900fc6f975fbd95e9, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36661,1689938258653; forceNewPlan=false, retain=false 2023-07-21 11:17:39,693 INFO [jenkins-hbase17:45117] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-21 11:17:39,695 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=4ae3a8653d7f5d415a8685b0dcb6cad3, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36661,1689938258653 2023-07-21 11:17:39,695 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=fd2ae591ad172e4900fc6f975fbd95e9, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36661,1689938258653 2023-07-21 11:17:39,695 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938259695"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938259695"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938259695"}]},"ts":"1689938259695"} 2023-07-21 11:17:39,695 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938259694"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938259694"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938259694"}]},"ts":"1689938259694"} 2023-07-21 11:17:39,698 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE; OpenRegionProcedure fd2ae591ad172e4900fc6f975fbd95e9, server=jenkins-hbase17.apache.org,36661,1689938258653}] 2023-07-21 11:17:39,699 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=6, state=RUNNABLE; OpenRegionProcedure 4ae3a8653d7f5d415a8685b0dcb6cad3, server=jenkins-hbase17.apache.org,36661,1689938258653}] 2023-07-21 11:17:39,850 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,36661,1689938258653 2023-07-21 11:17:39,851 DEBUG 
[RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:17:39,853 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:59328, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:17:39,858 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9. 2023-07-21 11:17:39,858 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fd2ae591ad172e4900fc6f975fbd95e9, NAME => 'hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:39,858 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:17:39,858 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9. service=MultiRowMutationService 2023-07-21 11:17:39,858 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-21 11:17:39,858 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup fd2ae591ad172e4900fc6f975fbd95e9 2023-07-21 11:17:39,858 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:39,858 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for fd2ae591ad172e4900fc6f975fbd95e9 2023-07-21 11:17:39,858 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for fd2ae591ad172e4900fc6f975fbd95e9 2023-07-21 11:17:39,860 INFO [StoreOpener-fd2ae591ad172e4900fc6f975fbd95e9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region fd2ae591ad172e4900fc6f975fbd95e9 2023-07-21 11:17:39,861 DEBUG [StoreOpener-fd2ae591ad172e4900fc6f975fbd95e9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/rsgroup/fd2ae591ad172e4900fc6f975fbd95e9/m 2023-07-21 11:17:39,861 DEBUG [StoreOpener-fd2ae591ad172e4900fc6f975fbd95e9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/rsgroup/fd2ae591ad172e4900fc6f975fbd95e9/m 2023-07-21 11:17:39,861 INFO [StoreOpener-fd2ae591ad172e4900fc6f975fbd95e9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fd2ae591ad172e4900fc6f975fbd95e9 columnFamilyName m 2023-07-21 11:17:39,862 INFO [StoreOpener-fd2ae591ad172e4900fc6f975fbd95e9-1] regionserver.HStore(310): Store=fd2ae591ad172e4900fc6f975fbd95e9/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:39,863 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/rsgroup/fd2ae591ad172e4900fc6f975fbd95e9 2023-07-21 11:17:39,863 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/rsgroup/fd2ae591ad172e4900fc6f975fbd95e9 2023-07-21 11:17:39,866 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for fd2ae591ad172e4900fc6f975fbd95e9 2023-07-21 11:17:39,868 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/rsgroup/fd2ae591ad172e4900fc6f975fbd95e9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:39,869 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened fd2ae591ad172e4900fc6f975fbd95e9; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@61bbc8ac, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:39,869 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for fd2ae591ad172e4900fc6f975fbd95e9: 2023-07-21 11:17:39,870 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9., pid=8, masterSystemTime=1689938259850 2023-07-21 11:17:39,873 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9. 2023-07-21 11:17:39,874 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9. 2023-07-21 11:17:39,874 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3. 
2023-07-21 11:17:39,874 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4ae3a8653d7f5d415a8685b0dcb6cad3, NAME => 'hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:39,874 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=fd2ae591ad172e4900fc6f975fbd95e9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36661,1689938258653 2023-07-21 11:17:39,874 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938259874"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938259874"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938259874"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938259874"}]},"ts":"1689938259874"} 2023-07-21 11:17:39,875 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 4ae3a8653d7f5d415a8685b0dcb6cad3 2023-07-21 11:17:39,875 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:39,875 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 4ae3a8653d7f5d415a8685b0dcb6cad3 2023-07-21 11:17:39,875 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 4ae3a8653d7f5d415a8685b0dcb6cad3 2023-07-21 11:17:39,876 INFO [StoreOpener-4ae3a8653d7f5d415a8685b0dcb6cad3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 4ae3a8653d7f5d415a8685b0dcb6cad3 2023-07-21 11:17:39,877 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-21 11:17:39,878 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; OpenRegionProcedure fd2ae591ad172e4900fc6f975fbd95e9, server=jenkins-hbase17.apache.org,36661,1689938258653 in 180 msec 2023-07-21 11:17:39,878 DEBUG [StoreOpener-4ae3a8653d7f5d415a8685b0dcb6cad3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/namespace/4ae3a8653d7f5d415a8685b0dcb6cad3/info 2023-07-21 11:17:39,878 DEBUG [StoreOpener-4ae3a8653d7f5d415a8685b0dcb6cad3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/namespace/4ae3a8653d7f5d415a8685b0dcb6cad3/info 2023-07-21 11:17:39,878 INFO [StoreOpener-4ae3a8653d7f5d415a8685b0dcb6cad3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4ae3a8653d7f5d415a8685b0dcb6cad3 columnFamilyName info 2023-07-21 11:17:39,879 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-21 11:17:39,879 INFO [StoreOpener-4ae3a8653d7f5d415a8685b0dcb6cad3-1] regionserver.HStore(310): Store=4ae3a8653d7f5d415a8685b0dcb6cad3/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:39,879 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=fd2ae591ad172e4900fc6f975fbd95e9, ASSIGN in 189 msec 2023-07-21 11:17:39,880 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/namespace/4ae3a8653d7f5d415a8685b0dcb6cad3 2023-07-21 11:17:39,881 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/namespace/4ae3a8653d7f5d415a8685b0dcb6cad3 2023-07-21 11:17:39,882 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:17:39,882 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938259882"}]},"ts":"1689938259882"} 2023-07-21 11:17:39,883 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 4ae3a8653d7f5d415a8685b0dcb6cad3 2023-07-21 11:17:39,884 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 11:17:39,886 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:17:39,886 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/namespace/4ae3a8653d7f5d415a8685b0dcb6cad3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:39,887 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 4ae3a8653d7f5d415a8685b0dcb6cad3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9952967360, jitterRate=-0.07305768132209778}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:39,887 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegion(965): Region open journal for 4ae3a8653d7f5d415a8685b0dcb6cad3: 2023-07-21 11:17:39,888 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3., pid=9, masterSystemTime=1689938259850 2023-07-21 11:17:39,888 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 250 msec 2023-07-21 11:17:39,889 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3. 2023-07-21 11:17:39,889 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3. 2023-07-21 11:17:39,889 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=4ae3a8653d7f5d415a8685b0dcb6cad3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36661,1689938258653 2023-07-21 11:17:39,890 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938259889"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938259889"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938259889"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938259889"}]},"ts":"1689938259889"} 2023-07-21 11:17:39,893 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=6 2023-07-21 11:17:39,893 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=6, state=SUCCESS; OpenRegionProcedure 4ae3a8653d7f5d415a8685b0dcb6cad3, server=jenkins-hbase17.apache.org,36661,1689938258653 in 192 msec 2023-07-21 11:17:39,894 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-21 11:17:39,894 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=4ae3a8653d7f5d415a8685b0dcb6cad3, ASSIGN in 206 msec 2023-07-21 11:17:39,895 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:17:39,895 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938259895"}]},"ts":"1689938259895"} 2023-07-21 11:17:39,896 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 11:17:39,898 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:17:39,899 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 273 msec 2023-07-21 11:17:39,927 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 11:17:39,928 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:17:39,928 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:39,932 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:17:39,933 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:59338, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:17:39,939 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 11:17:39,945 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,45117,1689938257866] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 11:17:39,945 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,45117,1689938257866] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-21 11:17:39,951 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:17:39,953 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:39,953 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,45117,1689938257866] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:39,954 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,45117,1689938257866] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 11:17:39,954 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 14 msec 2023-07-21 11:17:39,955 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,45117,1689938257866] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 11:17:39,961 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 11:17:39,968 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:17:39,970 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-07-21 11:17:39,974 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 11:17:39,975 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 11:17:39,975 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.177sec 2023-07-21 11:17:39,976 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-21 11:17:39,976 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:39,977 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-21 11:17:39,977 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-21 11:17:39,979 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:17:39,980 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:17:39,981 INFO [master/jenkins-hbase17:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-21 11:17:39,981 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/hbase/quota/8bfecc2c9b885a59dfca50d0a65d1abb 2023-07-21 11:17:39,982 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/hbase/quota/8bfecc2c9b885a59dfca50d0a65d1abb empty. 
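The master log above prints the full schema it will use for hbase:quota: two column families, 'q' and 'u', each with VERSIONS=1, BLOOMFILTER=ROW, BLOCKSIZE=65536, TTL=FOREVER and replication scope 0. A sketch of building an equivalent descriptor with the HBase 2.x client API; since hbase:quota itself is created by the master, the table name below is a hypothetical user table:

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class QuotaLikeSchemaSketch {
  // Mirrors the attributes printed for the 'q' and 'u' families of hbase:quota.
  static ColumnFamilyDescriptor family(String name) {
    return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes(name))
        .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
        .setInMemory(false)                  // IN_MEMORY => 'false'
        .setMaxVersions(1)                   // VERSIONS => '1'
        .setMinVersions(0)                   // MIN_VERSIONS => '0'
        .setTimeToLive(HConstants.FOREVER)   // TTL => 'FOREVER'
        .setBlockCacheEnabled(true)          // BLOCKCACHE => 'true'
        .setBlocksize(65536)                 // BLOCKSIZE => '65536'
        .setScope(0)                         // REPLICATION_SCOPE => '0'
        .build();
  }

  static TableDescriptor descriptor() {
    // "example:quota_like" is a placeholder; hbase:quota itself is owned by the master.
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("example", "quota_like"))
        .setColumnFamily(family("q"))
        .setColumnFamily(family("u"))
        .build();
  }
}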
2023-07-21 11:17:39,982 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/hbase/quota/8bfecc2c9b885a59dfca50d0a65d1abb 2023-07-21 11:17:39,982 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-21 11:17:39,986 DEBUG [Listener at localhost.localdomain/34273] zookeeper.ReadOnlyZKClient(139): Connect 0x562e5e9a to 127.0.0.1:62351 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:17:39,994 INFO [master/jenkins-hbase17:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-21 11:17:39,994 INFO [master/jenkins-hbase17:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-21 11:17:39,997 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,997 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:39,997 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 11:17:39,997 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 11:17:39,997 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,45117,1689938257866-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 11:17:39,998 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,45117,1689938257866-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
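The ChoreService lines above register periodic background work (QuotaObserverChore, ExpiredMobFileCleanerChore, MobCompactionChore) with a name, period and time unit. A minimal sketch of how a chore is defined and scheduled; the chore name, period and work done here are invented for illustration:

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  /** A stand-in Stoppable so the chore can be cancelled cooperatively. */
  static final class SimpleStopper implements Stoppable {
    private volatile boolean stopped;
    @Override public void stop(String why) { stopped = true; }
    @Override public boolean isStopped() { return stopped; }
  }

  /** Hypothetical chore with the same 60,000 ms period the QuotaObserverChore logs. */
  static final class ExampleChore extends ScheduledChore {
    ExampleChore(Stoppable stopper) {
      super("example-chore", stopper, 60_000);
    }
    @Override protected void chore() {
      System.out.println("periodic work goes here");
    }
  }

  public static void main(String[] args) throws InterruptedException {
    SimpleStopper stopper = new SimpleStopper();
    ChoreService service = new ChoreService("example");
    service.scheduleChore(new ExampleChore(stopper)); // produces the "Chore ... is enabled." log line
    Thread.sleep(5_000);
    stopper.stop("test finished");
    service.shutdown();
  }
}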
2023-07-21 11:17:40,000 DEBUG [Listener at localhost.localdomain/34273] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@46d138ec, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:40,004 DEBUG [hconnection-0x44e369f8-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:17:40,005 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 11:17:40,009 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:45610, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:17:40,009 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:40,010 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8bfecc2c9b885a59dfca50d0a65d1abb, NAME => 'hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp 2023-07-21 11:17:40,014 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,45117,1689938257866 2023-07-21 11:17:40,014 INFO [Listener at localhost.localdomain/34273] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:40,021 DEBUG [Listener at localhost.localdomain/34273] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 11:17:40,024 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:47670, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 11:17:40,026 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:40,026 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 8bfecc2c9b885a59dfca50d0a65d1abb, disabling compactions & flushes 2023-07-21 11:17:40,026 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb. 
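"Minicluster is up" marks the point where the harness can start issuing client calls against the embedded cluster. A sketch of the usual start/stop pattern with HBaseTestingUtility and StartMiniClusterOption; the counts below are illustrative, not taken from this run:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.apache.hadoop.hbase.client.Admin;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .build();
    util.startMiniCluster(option);   // blocks until the active master reports initialization complete
    try {
      Admin admin = util.getAdmin(); // shared Admin owned by the utility; no need to close it here
      System.out.println("cluster id: " + admin.getClusterMetrics().getClusterId());
    } finally {
      util.shutdownMiniCluster();    // tears down HBase, DFS and ZooKeeper
    }
  }
}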
2023-07-21 11:17:40,026 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb. 2023-07-21 11:17:40,026 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb. after waiting 0 ms 2023-07-21 11:17:40,026 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb. 2023-07-21 11:17:40,026 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb. 2023-07-21 11:17:40,026 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 8bfecc2c9b885a59dfca50d0a65d1abb: 2023-07-21 11:17:40,029 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 11:17:40,029 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:40,030 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:17:40,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(492): Client=jenkins//136.243.18.41 set balanceSwitch=false 2023-07-21 11:17:40,030 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689938260030"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938260030"}]},"ts":"1689938260030"} 2023-07-21 11:17:40,031 DEBUG [Listener at localhost.localdomain/34273] zookeeper.ReadOnlyZKClient(139): Connect 0x30086fa6 to 127.0.0.1:62351 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:17:40,033 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
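The "set balanceSwitch=false" request above is the master-side record of a client turning the load balancer off, which this kind of test does so region placement stays predictable. A minimal client-side sketch of the same call via the Admin API (the connection setup is the generic pattern, not this test's code):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class BalancerSwitchSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Returns the previous state; the master logs "set balanceSwitch=false" on its side.
      boolean previous = admin.balancerSwitch(false, true);
      System.out.println("balancer was previously " + (previous ? "on" : "off"));
    }
  }
}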
2023-07-21 11:17:40,037 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:17:40,037 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938260037"}]},"ts":"1689938260037"} 2023-07-21 11:17:40,038 DEBUG [Listener at localhost.localdomain/34273] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6ffb6cf6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:40,039 INFO [Listener at localhost.localdomain/34273] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:62351 2023-07-21 11:17:40,040 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-21 11:17:40,042 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:40,043 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:40,043 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:40,043 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:40,043 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:40,043 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=8bfecc2c9b885a59dfca50d0a65d1abb, ASSIGN}] 2023-07-21 11:17:40,045 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=8bfecc2c9b885a59dfca50d0a65d1abb, ASSIGN 2023-07-21 11:17:40,045 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=8bfecc2c9b885a59dfca50d0a65d1abb, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,43393,1689938258412; forceNewPlan=false, retain=false 2023-07-21 11:17:40,051 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:17:40,056 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1018798e2b6000a connected 2023-07-21 11:17:40,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.HMaster$15(3014): Client=jenkins//136.243.18.41 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-21 11:17:40,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-21 
11:17:40,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-21 11:17:40,068 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:17:40,070 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 12 msec 2023-07-21 11:17:40,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-21 11:17:40,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:40,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-21 11:17:40,174 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:17:40,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-21 11:17:40,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 11:17:40,176 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:40,176 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 11:17:40,178 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:17:40,179 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/np1/table1/a656438d2f116ec7b324aede7b82aa59 2023-07-21 11:17:40,180 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/np1/table1/a656438d2f116ec7b324aede7b82aa59 empty. 
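The np1 namespace created above carries two quota properties, hbase.namespace.quota.maxregions=5 and hbase.namespace.quota.maxtables=2, which the NamespaceAuditor enforces later in this log. A sketch of issuing the same kind of request from a client (the connection boilerplate is illustrative):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class NamespaceQuotaSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Same properties the master logged for namespace np1: at most 5 regions and 2 tables.
      NamespaceDescriptor np1 = NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .addConfiguration("hbase.namespace.quota.maxtables", "2")
          .build();
      admin.createNamespace(np1);
    }
  }
}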
2023-07-21 11:17:40,181 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/np1/table1/a656438d2f116ec7b324aede7b82aa59 2023-07-21 11:17:40,181 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-21 11:17:40,191 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:40,192 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => a656438d2f116ec7b324aede7b82aa59, NAME => 'np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp 2023-07-21 11:17:40,196 INFO [jenkins-hbase17:45117] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 11:17:40,197 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=8bfecc2c9b885a59dfca50d0a65d1abb, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,43393,1689938258412 2023-07-21 11:17:40,197 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689938260197"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938260197"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938260197"}]},"ts":"1689938260197"} 2023-07-21 11:17:40,198 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 8bfecc2c9b885a59dfca50d0a65d1abb, server=jenkins-hbase17.apache.org,43393,1689938258412}] 2023-07-21 11:17:40,204 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:40,205 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing a656438d2f116ec7b324aede7b82aa59, disabling compactions & flushes 2023-07-21 11:17:40,205 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59. 2023-07-21 11:17:40,205 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59. 2023-07-21 11:17:40,205 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59. after waiting 0 ms 2023-07-21 11:17:40,205 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59. 
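The recurring "Checking to see if procedure is done pid=15" lines are the client's HBaseAdmin polling the master until the CreateTableProcedure for np1:table1 finishes. A hedged sketch of that create-and-wait pattern from the client side, assuming the 2.x Admin's createTableAsync(TableDescriptor) overload and using the single 'fam1' family logged above; the timeout is arbitrary:

import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class AsyncCreateTableSketch {
  public static void main(String[] args) throws Exception {
    TableDescriptor table1 = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("np1", "table1"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
        .build();
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // The returned future is backed by the same "is procedure done?" polling seen in the log.
      Future<Void> create = admin.createTableAsync(table1);
      create.get(60, TimeUnit.SECONDS);
      System.out.println("created: " + admin.tableExists(TableName.valueOf("np1", "table1")));
    }
  }
}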
2023-07-21 11:17:40,205 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59. 2023-07-21 11:17:40,205 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for a656438d2f116ec7b324aede7b82aa59: 2023-07-21 11:17:40,207 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:17:40,208 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689938260208"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938260208"}]},"ts":"1689938260208"} 2023-07-21 11:17:40,210 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 11:17:40,211 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:17:40,211 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938260211"}]},"ts":"1689938260211"} 2023-07-21 11:17:40,213 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-21 11:17:40,218 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:40,218 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:40,218 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:40,218 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:40,219 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:40,219 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=a656438d2f116ec7b324aede7b82aa59, ASSIGN}] 2023-07-21 11:17:40,221 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=a656438d2f116ec7b324aede7b82aa59, ASSIGN 2023-07-21 11:17:40,222 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=a656438d2f116ec7b324aede7b82aa59, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,34379,1689938258169; forceNewPlan=false, retain=false 2023-07-21 11:17:40,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 11:17:40,351 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,43393,1689938258412 2023-07-21 11:17:40,351 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for 
service=AdminService, sasl=false 2023-07-21 11:17:40,354 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:38210, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:17:40,358 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb. 2023-07-21 11:17:40,359 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8bfecc2c9b885a59dfca50d0a65d1abb, NAME => 'hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:40,359 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 8bfecc2c9b885a59dfca50d0a65d1abb 2023-07-21 11:17:40,359 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:40,359 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 8bfecc2c9b885a59dfca50d0a65d1abb 2023-07-21 11:17:40,359 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 8bfecc2c9b885a59dfca50d0a65d1abb 2023-07-21 11:17:40,360 INFO [StoreOpener-8bfecc2c9b885a59dfca50d0a65d1abb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 8bfecc2c9b885a59dfca50d0a65d1abb 2023-07-21 11:17:40,362 DEBUG [StoreOpener-8bfecc2c9b885a59dfca50d0a65d1abb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/quota/8bfecc2c9b885a59dfca50d0a65d1abb/q 2023-07-21 11:17:40,362 DEBUG [StoreOpener-8bfecc2c9b885a59dfca50d0a65d1abb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/quota/8bfecc2c9b885a59dfca50d0a65d1abb/q 2023-07-21 11:17:40,362 INFO [StoreOpener-8bfecc2c9b885a59dfca50d0a65d1abb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8bfecc2c9b885a59dfca50d0a65d1abb columnFamilyName q 2023-07-21 11:17:40,363 INFO [StoreOpener-8bfecc2c9b885a59dfca50d0a65d1abb-1] regionserver.HStore(310): Store=8bfecc2c9b885a59dfca50d0a65d1abb/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:40,363 INFO [StoreOpener-8bfecc2c9b885a59dfca50d0a65d1abb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 8bfecc2c9b885a59dfca50d0a65d1abb 2023-07-21 11:17:40,364 DEBUG [StoreOpener-8bfecc2c9b885a59dfca50d0a65d1abb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/quota/8bfecc2c9b885a59dfca50d0a65d1abb/u 2023-07-21 11:17:40,364 DEBUG [StoreOpener-8bfecc2c9b885a59dfca50d0a65d1abb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/quota/8bfecc2c9b885a59dfca50d0a65d1abb/u 2023-07-21 11:17:40,365 INFO [StoreOpener-8bfecc2c9b885a59dfca50d0a65d1abb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8bfecc2c9b885a59dfca50d0a65d1abb columnFamilyName u 2023-07-21 11:17:40,366 INFO [StoreOpener-8bfecc2c9b885a59dfca50d0a65d1abb-1] regionserver.HStore(310): Store=8bfecc2c9b885a59dfca50d0a65d1abb/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:40,366 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/quota/8bfecc2c9b885a59dfca50d0a65d1abb 2023-07-21 11:17:40,367 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/quota/8bfecc2c9b885a59dfca50d0a65d1abb 2023-07-21 11:17:40,369 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 2023-07-21 11:17:40,370 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 8bfecc2c9b885a59dfca50d0a65d1abb 2023-07-21 11:17:40,373 INFO [jenkins-hbase17:45117] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
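The FlushLargeStoresPolicy message above notes that hbase.hregion.percolumnfamilyflush.size.lower.bound is not set in the hbase:quota descriptor, so the lower bound falls back to the memstore flush size divided by the number of families (64 MB here). A sketch of setting that property explicitly on a hypothetical two-family table; the 32 MB value is illustrative:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class PerFamilyFlushSketch {
  static TableDescriptor withExplicitLowerBound() {
    // A family must hold at least this many bytes before it is picked for a "large stores" flush.
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("example", "two_families"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("a"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("b"))
        .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
            String.valueOf(32L * 1024 * 1024))
        .build();
  }
}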
2023-07-21 11:17:40,374 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/quota/8bfecc2c9b885a59dfca50d0a65d1abb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:40,374 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=a656438d2f116ec7b324aede7b82aa59, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,34379,1689938258169 2023-07-21 11:17:40,374 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689938260374"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938260374"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938260374"}]},"ts":"1689938260374"} 2023-07-21 11:17:40,374 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 8bfecc2c9b885a59dfca50d0a65d1abb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11505194720, jitterRate=0.07150475680828094}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-21 11:17:40,374 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 8bfecc2c9b885a59dfca50d0a65d1abb: 2023-07-21 11:17:40,375 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb., pid=16, masterSystemTime=1689938260351 2023-07-21 11:17:40,378 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure a656438d2f116ec7b324aede7b82aa59, server=jenkins-hbase17.apache.org,34379,1689938258169}] 2023-07-21 11:17:40,379 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb. 2023-07-21 11:17:40,380 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=8bfecc2c9b885a59dfca50d0a65d1abb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,43393,1689938258412 2023-07-21 11:17:40,380 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689938260379"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938260379"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938260379"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938260379"}]},"ts":"1689938260379"} 2023-07-21 11:17:40,380 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb. 
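With hbase:quota now open (family 'q' for quota definitions, 'u' for usage), quota settings are normally written through the Admin API rather than by mutating the table directly. A hedged sketch of one such call, a per-user request throttle; the user name and limit are invented:

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaSettings;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class SetQuotaSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Limit a hypothetical user to 100 requests per second; the setting is persisted in hbase:quota.
      QuotaSettings throttle = QuotaSettingsFactory.throttleUser(
          "someuser", ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS);
      admin.setQuota(throttle);
    }
  }
}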
2023-07-21 11:17:40,383 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-21 11:17:40,383 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 8bfecc2c9b885a59dfca50d0a65d1abb, server=jenkins-hbase17.apache.org,43393,1689938258412 in 184 msec 2023-07-21 11:17:40,384 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-21 11:17:40,385 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=8bfecc2c9b885a59dfca50d0a65d1abb, ASSIGN in 340 msec 2023-07-21 11:17:40,386 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:17:40,386 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938260386"}]},"ts":"1689938260386"} 2023-07-21 11:17:40,388 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-21 11:17:40,393 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:17:40,407 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 417 msec 2023-07-21 11:17:40,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 11:17:40,534 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59. 
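Once the OpenRegionProcedure and its parent TransitRegionStateProcedure finish, hbase:meta holds the region's server and openSeqNum, and a client can observe the assignment through a RegionLocator. A small sketch, using hbase:quota as the table of interest:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionLocationSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:quota"))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        // Prints the encoded region name and the region server it is assigned to.
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}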
2023-07-21 11:17:40,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a656438d2f116ec7b324aede7b82aa59, NAME => 'np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:40,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 a656438d2f116ec7b324aede7b82aa59 2023-07-21 11:17:40,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:40,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for a656438d2f116ec7b324aede7b82aa59 2023-07-21 11:17:40,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for a656438d2f116ec7b324aede7b82aa59 2023-07-21 11:17:40,536 INFO [StoreOpener-a656438d2f116ec7b324aede7b82aa59-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region a656438d2f116ec7b324aede7b82aa59 2023-07-21 11:17:40,537 DEBUG [StoreOpener-a656438d2f116ec7b324aede7b82aa59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/np1/table1/a656438d2f116ec7b324aede7b82aa59/fam1 2023-07-21 11:17:40,537 DEBUG [StoreOpener-a656438d2f116ec7b324aede7b82aa59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/np1/table1/a656438d2f116ec7b324aede7b82aa59/fam1 2023-07-21 11:17:40,537 INFO [StoreOpener-a656438d2f116ec7b324aede7b82aa59-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a656438d2f116ec7b324aede7b82aa59 columnFamilyName fam1 2023-07-21 11:17:40,538 INFO [StoreOpener-a656438d2f116ec7b324aede7b82aa59-1] regionserver.HStore(310): Store=a656438d2f116ec7b324aede7b82aa59/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:40,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/np1/table1/a656438d2f116ec7b324aede7b82aa59 2023-07-21 11:17:40,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits 
file(s) under hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/np1/table1/a656438d2f116ec7b324aede7b82aa59 2023-07-21 11:17:40,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for a656438d2f116ec7b324aede7b82aa59 2023-07-21 11:17:40,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/np1/table1/a656438d2f116ec7b324aede7b82aa59/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:40,544 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened a656438d2f116ec7b324aede7b82aa59; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9506129760, jitterRate=-0.11467267572879791}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:40,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for a656438d2f116ec7b324aede7b82aa59: 2023-07-21 11:17:40,544 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59., pid=18, masterSystemTime=1689938260529 2023-07-21 11:17:40,546 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59. 2023-07-21 11:17:40,546 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59. 
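The "Opened ...; SteppingSplitPolicysuper{...desiredMaxFileSize=..., jitterRate=...}" lines show the split policy resolved for each region: SteppingSplitPolicy layered over IncreasingToUpperBoundRegionSplitPolicy and ConstantSizeRegionSplitPolicy, with a jittered max file size. A sketch of the two usual ways to influence this choice, per table or cluster-wide; the policy class and size picked below are illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class SplitPolicySketch {
  // Per-table: pin the split policy and max file size in the table descriptor.
  static TableDescriptor perTable() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("example", "split_demo"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy")
        .setMaxFileSize(10L * 1024 * 1024 * 1024)  // 10 GB before a split is requested
        .build();
  }

  // Cluster-wide defaults, equivalent to hbase-site.xml entries.
  static Configuration clusterWide() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.regionserver.region.split.policy",
        "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");
    conf.setLong("hbase.hregion.max.filesize", 10L * 1024 * 1024 * 1024);
    return conf;
  }
}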
2023-07-21 11:17:40,547 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=a656438d2f116ec7b324aede7b82aa59, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,34379,1689938258169 2023-07-21 11:17:40,547 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689938260547"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938260547"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938260547"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938260547"}]},"ts":"1689938260547"} 2023-07-21 11:17:40,554 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-21 11:17:40,555 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure a656438d2f116ec7b324aede7b82aa59, server=jenkins-hbase17.apache.org,34379,1689938258169 in 175 msec 2023-07-21 11:17:40,556 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-21 11:17:40,556 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=a656438d2f116ec7b324aede7b82aa59, ASSIGN in 335 msec 2023-07-21 11:17:40,556 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:17:40,556 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938260556"}]},"ts":"1689938260556"} 2023-07-21 11:17:40,558 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-21 11:17:40,559 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:17:40,561 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 388 msec 2023-07-21 11:17:40,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 11:17:40,779 INFO [Listener at localhost.localdomain/34273] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-21 11:17:40,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:40,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-21 11:17:40,783 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=19, 
state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:17:40,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-21 11:17:40,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 11:17:40,808 DEBUG [PEWorker-5] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:17:40,811 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:38216, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:17:40,815 INFO [PEWorker-5] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=34 msec 2023-07-21 11:17:40,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 11:17:40,887 INFO [Listener at localhost.localdomain/34273] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
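[Editor's note] The entries for pid=15 and pid=19 above record a successful create of np1:table1 followed by a rejected create of np1:table2, which the master rolls back with a QuotaExceededException because the np1 namespace is capped at 5 regions. The following is a hedged, illustrative sketch only (not the test's actual source) of how such a namespace region quota is typically configured and tripped through the standard HBase 2.x Admin API; the connection setup and the choice of split keys are assumptions for the example.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceRegionQuotaSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Namespace capped at 5 regions, mirroring the np1 namespace in the log (assumed setup).
      admin.createNamespace(NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .build());

      // First table: a single region, under the quota; succeeds like pid=15 above.
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("np1", "table1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
          .build());

      // Second table: 5 split keys => 6 regions, pushing the namespace past its cap
      // (the log reports "not allowed to have 6 regions ... permitted is only 5").
      byte[][] splits = {
          Bytes.toBytes("1"), Bytes.toBytes("2"), Bytes.toBytes("3"),
          Bytes.toBytes("4"), Bytes.toBytes("5")
      };
      try {
        admin.createTable(TableDescriptorBuilder
            .newBuilder(TableName.valueOf("np1", "table2"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
            .build(), splits);
      } catch (IOException e) {
        // The master rolls the CreateTableProcedure back and the client surfaces the
        // quota violation (a QuotaExceededException, possibly wrapped), as for pid=19.
        System.out.println("create rejected: " + e.getMessage());
      }
    }
  }
}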
2023-07-21 11:17:40,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:40,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:40,890 INFO [Listener at localhost.localdomain/34273] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-21 11:17:40,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable np1:table1 2023-07-21 11:17:40,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-21 11:17:40,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 11:17:40,894 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938260894"}]},"ts":"1689938260894"} 2023-07-21 11:17:40,895 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-21 11:17:40,896 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-21 11:17:40,897 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=a656438d2f116ec7b324aede7b82aa59, UNASSIGN}] 2023-07-21 11:17:40,897 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=a656438d2f116ec7b324aede7b82aa59, UNASSIGN 2023-07-21 11:17:40,898 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=a656438d2f116ec7b324aede7b82aa59, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,34379,1689938258169 2023-07-21 11:17:40,898 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689938260898"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938260898"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938260898"}]},"ts":"1689938260898"} 2023-07-21 11:17:40,899 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure a656438d2f116ec7b324aede7b82aa59, server=jenkins-hbase17.apache.org,34379,1689938258169}] 2023-07-21 11:17:40,907 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-21 11:17:40,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 11:17:41,051 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close a656438d2f116ec7b324aede7b82aa59 2023-07-21 11:17:41,052 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing a656438d2f116ec7b324aede7b82aa59, disabling compactions & flushes 2023-07-21 11:17:41,052 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59. 2023-07-21 11:17:41,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59. 2023-07-21 11:17:41,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59. after waiting 0 ms 2023-07-21 11:17:41,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59. 2023-07-21 11:17:41,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/np1/table1/a656438d2f116ec7b324aede7b82aa59/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:41,057 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59. 2023-07-21 11:17:41,057 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for a656438d2f116ec7b324aede7b82aa59: 2023-07-21 11:17:41,059 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed a656438d2f116ec7b324aede7b82aa59 2023-07-21 11:17:41,059 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=a656438d2f116ec7b324aede7b82aa59, regionState=CLOSED 2023-07-21 11:17:41,059 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689938261059"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938261059"}]},"ts":"1689938261059"} 2023-07-21 11:17:41,061 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-21 11:17:41,062 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure a656438d2f116ec7b324aede7b82aa59, server=jenkins-hbase17.apache.org,34379,1689938258169 in 161 msec 2023-07-21 11:17:41,063 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-21 11:17:41,063 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=a656438d2f116ec7b324aede7b82aa59, UNASSIGN in 164 msec 2023-07-21 11:17:41,063 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938261063"}]},"ts":"1689938261063"} 2023-07-21 11:17:41,064 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-21 11:17:41,065 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-21 11:17:41,067 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 175 msec 2023-07-21 11:17:41,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 11:17:41,196 INFO [Listener at localhost.localdomain/34273] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-21 11:17:41,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete np1:table1 2023-07-21 11:17:41,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-21 11:17:41,200 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 11:17:41,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-21 11:17:41,200 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 11:17:41,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:41,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 11:17:41,203 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/np1/table1/a656438d2f116ec7b324aede7b82aa59 2023-07-21 11:17:41,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-21 11:17:41,205 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/np1/table1/a656438d2f116ec7b324aede7b82aa59/fam1, FileablePath, hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/np1/table1/a656438d2f116ec7b324aede7b82aa59/recovered.edits] 2023-07-21 11:17:41,209 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/np1/table1/a656438d2f116ec7b324aede7b82aa59/recovered.edits/4.seqid to hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/archive/data/np1/table1/a656438d2f116ec7b324aede7b82aa59/recovered.edits/4.seqid 2023-07-21 11:17:41,210 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/.tmp/data/np1/table1/a656438d2f116ec7b324aede7b82aa59 2023-07-21 11:17:41,210 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-21 11:17:41,212 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, 
state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 11:17:41,214 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-21 11:17:41,216 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-21 11:17:41,218 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 11:17:41,218 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-21 11:17:41,218 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938261218"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:41,219 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 11:17:41,219 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => a656438d2f116ec7b324aede7b82aa59, NAME => 'np1:table1,,1689938260170.a656438d2f116ec7b324aede7b82aa59.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 11:17:41,219 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-21 11:17:41,220 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689938261220"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:41,221 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-21 11:17:41,224 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 11:17:41,226 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 27 msec 2023-07-21 11:17:41,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-21 11:17:41,306 INFO [Listener at localhost.localdomain/34273] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-21 11:17:41,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.HMaster$17(3086): Client=jenkins//136.243.18.41 delete np1 2023-07-21 11:17:41,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-21 11:17:41,318 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 11:17:41,321 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 11:17:41,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-21 11:17:41,323 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; 
DeleteNamespaceProcedure, namespace=np1 2023-07-21 11:17:41,324 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-21 11:17:41,324 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:17:41,325 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 11:17:41,327 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 11:17:41,328 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 17 msec 2023-07-21 11:17:41,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45117] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-21 11:17:41,425 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 11:17:41,425 INFO [Listener at localhost.localdomain/34273] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 11:17:41,425 DEBUG [Listener at localhost.localdomain/34273] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x562e5e9a to 127.0.0.1:62351 2023-07-21 11:17:41,425 DEBUG [Listener at localhost.localdomain/34273] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:41,425 DEBUG [Listener at localhost.localdomain/34273] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 11:17:41,425 DEBUG [Listener at localhost.localdomain/34273] util.JVMClusterUtil(257): Found active master hash=2060049703, stopped=false 2023-07-21 11:17:41,425 DEBUG [Listener at localhost.localdomain/34273] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 11:17:41,425 DEBUG [Listener at localhost.localdomain/34273] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 11:17:41,425 DEBUG [Listener at localhost.localdomain/34273] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-21 11:17:41,425 INFO [Listener at localhost.localdomain/34273] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,45117,1689938257866 2023-07-21 11:17:41,426 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:36661-0x1018798e2b60003, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:41,426 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:41,426 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): 
regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:41,427 INFO [Listener at localhost.localdomain/34273] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 11:17:41,426 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:43393-0x1018798e2b60002, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:41,426 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:41,428 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36661-0x1018798e2b60003, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:41,428 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:41,428 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:41,429 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43393-0x1018798e2b60002, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:41,429 DEBUG [Listener at localhost.localdomain/34273] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1cc486a5 to 127.0.0.1:62351 2023-07-21 11:17:41,429 DEBUG [Listener at localhost.localdomain/34273] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:41,429 INFO [Listener at localhost.localdomain/34273] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,34379,1689938258169' ***** 2023-07-21 11:17:41,429 INFO [Listener at localhost.localdomain/34273] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:17:41,429 INFO [RS:0;jenkins-hbase17:34379] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:17:41,430 INFO [Listener at localhost.localdomain/34273] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,43393,1689938258412' ***** 2023-07-21 11:17:41,430 INFO [Listener at localhost.localdomain/34273] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:17:41,430 INFO [Listener at localhost.localdomain/34273] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,36661,1689938258653' ***** 2023-07-21 11:17:41,431 INFO [Listener at localhost.localdomain/34273] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:17:41,431 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:41,430 INFO [RS:1;jenkins-hbase17:43393] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:17:41,434 INFO [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:17:41,438 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:17:41,444 
INFO [RS:0;jenkins-hbase17:34379] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@63d31eea{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:41,445 INFO [RS:2;jenkins-hbase17:36661] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6a4ea3be{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:41,445 INFO [RS:1;jenkins-hbase17:43393] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4beeb415{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:41,445 INFO [RS:0;jenkins-hbase17:34379] server.AbstractConnector(383): Stopped ServerConnector@4320000f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:17:41,445 INFO [RS:0;jenkins-hbase17:34379] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:17:41,445 INFO [RS:1;jenkins-hbase17:43393] server.AbstractConnector(383): Stopped ServerConnector@786f5134{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:17:41,445 INFO [RS:2;jenkins-hbase17:36661] server.AbstractConnector(383): Stopped ServerConnector@694a562a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:17:41,445 INFO [RS:1;jenkins-hbase17:43393] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:17:41,446 INFO [RS:0;jenkins-hbase17:34379] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@46d1080d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:17:41,446 INFO [RS:2;jenkins-hbase17:36661] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:17:41,448 INFO [RS:1;jenkins-hbase17:43393] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7af00567{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:17:41,448 INFO [RS:0;jenkins-hbase17:34379] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@266b9be0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/hadoop.log.dir/,STOPPED} 2023-07-21 11:17:41,448 INFO [RS:1;jenkins-hbase17:43393] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@c739b3a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/hadoop.log.dir/,STOPPED} 2023-07-21 11:17:41,448 INFO [RS:2;jenkins-hbase17:36661] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@29d6565a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:17:41,449 INFO [RS:2;jenkins-hbase17:36661] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@2d868100{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/hadoop.log.dir/,STOPPED} 2023-07-21 11:17:41,449 INFO [RS:0;jenkins-hbase17:34379] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:17:41,449 INFO [RS:0;jenkins-hbase17:34379] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:17:41,449 INFO [RS:0;jenkins-hbase17:34379] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:17:41,449 INFO [RS:0;jenkins-hbase17:34379] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,34379,1689938258169 2023-07-21 11:17:41,449 INFO [RS:1;jenkins-hbase17:43393] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:17:41,449 DEBUG [RS:0;jenkins-hbase17:34379] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x099494b4 to 127.0.0.1:62351 2023-07-21 11:17:41,449 INFO [RS:1;jenkins-hbase17:43393] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:17:41,449 INFO [RS:2;jenkins-hbase17:36661] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:17:41,450 INFO [RS:1;jenkins-hbase17:43393] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:17:41,449 DEBUG [RS:0;jenkins-hbase17:34379] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:41,450 INFO [RS:1;jenkins-hbase17:43393] regionserver.HRegionServer(3305): Received CLOSE for 8bfecc2c9b885a59dfca50d0a65d1abb 2023-07-21 11:17:41,449 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:17:41,450 INFO [RS:0;jenkins-hbase17:34379] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:17:41,451 INFO [RS:0;jenkins-hbase17:34379] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:17:41,451 INFO [RS:0;jenkins-hbase17:34379] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 11:17:41,450 INFO [RS:2;jenkins-hbase17:36661] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:17:41,450 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:17:41,451 INFO [RS:2;jenkins-hbase17:36661] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
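[Editor's note] Earlier in this section (pids 20 through 24) the client disables and deletes np1:table1 and then drops the np1 namespace before the minicluster teardown that follows. Below is a minimal sketch, under the assumption of an already open Connection, of the corresponding client-side Admin calls; the real test drives these through its own wrappers, so treat the class and method names here as illustrative only.

import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public final class Np1CleanupSketch {
  private Np1CleanupSketch() {}

  /** Disable and delete np1:table1, then drop the now-empty np1 namespace. */
  public static void cleanup(Connection conn) throws IOException {
    try (Admin admin = conn.getAdmin()) {
      TableName table1 = TableName.valueOf("np1", "table1");
      // A table must be disabled before it can be deleted; each Admin call blocks
      // until the corresponding master procedure completes (pids 20 and 23 above).
      admin.disableTable(table1);
      admin.deleteTable(table1);
      // The namespace can only be removed once it no longer contains tables
      // (DeleteNamespaceProcedure, pid=24 above).
      admin.deleteNamespace("np1");
    }
  }
}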
2023-07-21 11:17:41,451 INFO [RS:1;jenkins-hbase17:43393] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,43393,1689938258412 2023-07-21 11:17:41,452 INFO [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(3305): Received CLOSE for fd2ae591ad172e4900fc6f975fbd95e9 2023-07-21 11:17:41,451 INFO [RS:0;jenkins-hbase17:34379] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 11:17:41,452 DEBUG [RS:1;jenkins-hbase17:43393] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x19489931 to 127.0.0.1:62351 2023-07-21 11:17:41,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 8bfecc2c9b885a59dfca50d0a65d1abb, disabling compactions & flushes 2023-07-21 11:17:41,452 DEBUG [RS:1;jenkins-hbase17:43393] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:41,452 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb. 2023-07-21 11:17:41,452 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:41,453 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb. 2023-07-21 11:17:41,452 INFO [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(3305): Received CLOSE for 4ae3a8653d7f5d415a8685b0dcb6cad3 2023-07-21 11:17:41,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing fd2ae591ad172e4900fc6f975fbd95e9, disabling compactions & flushes 2023-07-21 11:17:41,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb. after waiting 1 ms 2023-07-21 11:17:41,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb. 2023-07-21 11:17:41,452 INFO [RS:1;jenkins-hbase17:43393] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 11:17:41,454 DEBUG [RS:1;jenkins-hbase17:43393] regionserver.HRegionServer(1478): Online Regions={8bfecc2c9b885a59dfca50d0a65d1abb=hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb.} 2023-07-21 11:17:41,454 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9. 2023-07-21 11:17:41,454 DEBUG [RS:1;jenkins-hbase17:43393] regionserver.HRegionServer(1504): Waiting on 8bfecc2c9b885a59dfca50d0a65d1abb 2023-07-21 11:17:41,454 INFO [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,36661,1689938258653 2023-07-21 11:17:41,453 INFO [RS:0;jenkins-hbase17:34379] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 11:17:41,454 DEBUG [RS:2;jenkins-hbase17:36661] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0233e372 to 127.0.0.1:62351 2023-07-21 11:17:41,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9. 
2023-07-21 11:17:41,454 DEBUG [RS:2;jenkins-hbase17:36661] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:41,454 DEBUG [RS:0;jenkins-hbase17:34379] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-21 11:17:41,454 INFO [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-21 11:17:41,454 DEBUG [RS:0;jenkins-hbase17:34379] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-21 11:17:41,454 DEBUG [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(1478): Online Regions={fd2ae591ad172e4900fc6f975fbd95e9=hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9., 4ae3a8653d7f5d415a8685b0dcb6cad3=hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3.} 2023-07-21 11:17:41,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9. after waiting 0 ms 2023-07-21 11:17:41,455 DEBUG [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(1504): Waiting on 4ae3a8653d7f5d415a8685b0dcb6cad3, fd2ae591ad172e4900fc6f975fbd95e9 2023-07-21 11:17:41,455 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9. 2023-07-21 11:17:41,455 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing fd2ae591ad172e4900fc6f975fbd95e9 1/1 column families, dataSize=594 B heapSize=1.05 KB 2023-07-21 11:17:41,456 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 11:17:41,456 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 11:17:41,456 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 11:17:41,456 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 11:17:41,456 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 11:17:41,456 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.90 KB heapSize=11.10 KB 2023-07-21 11:17:41,459 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/quota/8bfecc2c9b885a59dfca50d0a65d1abb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:41,460 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb. 2023-07-21 11:17:41,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 8bfecc2c9b885a59dfca50d0a65d1abb: 2023-07-21 11:17:41,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689938259976.8bfecc2c9b885a59dfca50d0a65d1abb. 
2023-07-21 11:17:41,475 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.27 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/.tmp/info/72cf289479944200b557da5a529262ca 2023-07-21 11:17:41,475 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=594 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/rsgroup/fd2ae591ad172e4900fc6f975fbd95e9/.tmp/m/0776a9e1d4ab43c29229df32940bd44e 2023-07-21 11:17:41,483 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 72cf289479944200b557da5a529262ca 2023-07-21 11:17:41,484 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/rsgroup/fd2ae591ad172e4900fc6f975fbd95e9/.tmp/m/0776a9e1d4ab43c29229df32940bd44e as hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/rsgroup/fd2ae591ad172e4900fc6f975fbd95e9/m/0776a9e1d4ab43c29229df32940bd44e 2023-07-21 11:17:41,490 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/rsgroup/fd2ae591ad172e4900fc6f975fbd95e9/m/0776a9e1d4ab43c29229df32940bd44e, entries=1, sequenceid=7, filesize=4.9 K 2023-07-21 11:17:41,492 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~594 B/594, heapSize ~1.04 KB/1064, currentSize=0 B/0 for fd2ae591ad172e4900fc6f975fbd95e9 in 37ms, sequenceid=7, compaction requested=false 2023-07-21 11:17:41,492 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 11:17:41,497 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/.tmp/rep_barrier/84a9682990ee4d47a8ff9cb10900b525 2023-07-21 11:17:41,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/rsgroup/fd2ae591ad172e4900fc6f975fbd95e9/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-21 11:17:41,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:17:41,502 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9. 
2023-07-21 11:17:41,502 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for fd2ae591ad172e4900fc6f975fbd95e9: 2023-07-21 11:17:41,502 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689938259636.fd2ae591ad172e4900fc6f975fbd95e9. 2023-07-21 11:17:41,504 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 4ae3a8653d7f5d415a8685b0dcb6cad3, disabling compactions & flushes 2023-07-21 11:17:41,504 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3. 2023-07-21 11:17:41,504 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3. 2023-07-21 11:17:41,504 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3. after waiting 0 ms 2023-07-21 11:17:41,504 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3. 2023-07-21 11:17:41,504 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 4ae3a8653d7f5d415a8685b0dcb6cad3 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-21 11:17:41,507 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 84a9682990ee4d47a8ff9cb10900b525 2023-07-21 11:17:41,510 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:41,531 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/.tmp/table/8e126dbb01804acc9fbca5b5d5e86f5e 2023-07-21 11:17:41,537 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8e126dbb01804acc9fbca5b5d5e86f5e 2023-07-21 11:17:41,538 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/.tmp/info/72cf289479944200b557da5a529262ca as hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/info/72cf289479944200b557da5a529262ca 2023-07-21 11:17:41,545 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 72cf289479944200b557da5a529262ca 2023-07-21 11:17:41,545 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/info/72cf289479944200b557da5a529262ca, entries=32, sequenceid=31, filesize=8.5 K 2023-07-21 11:17:41,546 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): 
Committing hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/.tmp/rep_barrier/84a9682990ee4d47a8ff9cb10900b525 as hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/rep_barrier/84a9682990ee4d47a8ff9cb10900b525 2023-07-21 11:17:41,553 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 84a9682990ee4d47a8ff9cb10900b525 2023-07-21 11:17:41,553 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/rep_barrier/84a9682990ee4d47a8ff9cb10900b525, entries=1, sequenceid=31, filesize=4.9 K 2023-07-21 11:17:41,554 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/.tmp/table/8e126dbb01804acc9fbca5b5d5e86f5e as hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/table/8e126dbb01804acc9fbca5b5d5e86f5e 2023-07-21 11:17:41,564 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8e126dbb01804acc9fbca5b5d5e86f5e 2023-07-21 11:17:41,564 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/table/8e126dbb01804acc9fbca5b5d5e86f5e, entries=8, sequenceid=31, filesize=5.2 K 2023-07-21 11:17:41,565 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.90 KB/6045, heapSize ~11.05 KB/11320, currentSize=0 B/0 for 1588230740 in 109ms, sequenceid=31, compaction requested=false 2023-07-21 11:17:41,565 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 11:17:41,575 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-21 11:17:41,575 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:17:41,576 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 11:17:41,576 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 11:17:41,576 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 11:17:41,654 INFO [RS:1;jenkins-hbase17:43393] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,43393,1689938258412; all regions closed. 2023-07-21 11:17:41,654 DEBUG [RS:1;jenkins-hbase17:43393] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-21 11:17:41,655 INFO [RS:0;jenkins-hbase17:34379] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,34379,1689938258169; all regions closed. 2023-07-21 11:17:41,655 DEBUG [RS:0;jenkins-hbase17:34379] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-21 11:17:41,655 DEBUG [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(1504): Waiting on 4ae3a8653d7f5d415a8685b0dcb6cad3 2023-07-21 11:17:41,669 DEBUG [RS:1;jenkins-hbase17:43393] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/oldWALs 2023-07-21 11:17:41,670 INFO [RS:1;jenkins-hbase17:43393] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C43393%2C1689938258412:(num 1689938259289) 2023-07-21 11:17:41,670 DEBUG [RS:1;jenkins-hbase17:43393] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:41,670 INFO [RS:1;jenkins-hbase17:43393] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:41,670 DEBUG [RS:0;jenkins-hbase17:34379] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/oldWALs 2023-07-21 11:17:41,670 INFO [RS:0;jenkins-hbase17:34379] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C34379%2C1689938258169.meta:.meta(num 1689938259550) 2023-07-21 11:17:41,670 INFO [RS:1;jenkins-hbase17:43393] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 11:17:41,670 INFO [RS:1;jenkins-hbase17:43393] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:17:41,670 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:17:41,670 INFO [RS:1;jenkins-hbase17:43393] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:17:41,670 INFO [RS:1;jenkins-hbase17:43393] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
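[Editor's note] The remaining entries are the minicluster teardown: each region server flushes and closes its regions, archives its WAL to oldWALs, and removes its ephemeral znode, after which the master shuts itself down. As a hedged sketch only, this is roughly the test-side lifecycle that produces such a teardown with HBaseTestingUtility in HBase 2.4; the option values are assumptions chosen to mirror a one-master, three-region-server setup.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterLifecycleSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Bring up one master, three region servers and three data nodes, comparable
    // to the cluster whose shutdown this log records.
    util.startMiniCluster(StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .build());
    try {
      // ... test body: create namespaces/tables, exercise rsgroup admin calls ...
    } finally {
      // Triggers the sequence seen above: region close and flush, WAL archival,
      // ephemeral znode removal, then master shutdown.
      util.shutdownMiniCluster();
    }
  }
}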
2023-07-21 11:17:41,671 INFO [RS:1;jenkins-hbase17:43393] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:43393 2023-07-21 11:17:41,673 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:36661-0x1018798e2b60003, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,43393,1689938258412 2023-07-21 11:17:41,673 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:43393-0x1018798e2b60002, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,43393,1689938258412 2023-07-21 11:17:41,673 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:36661-0x1018798e2b60003, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:41,673 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,43393,1689938258412 2023-07-21 11:17:41,674 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:43393-0x1018798e2b60002, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:41,674 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:41,674 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:41,674 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,43393,1689938258412] 2023-07-21 11:17:41,674 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,43393,1689938258412; numProcessing=1 2023-07-21 11:17:41,675 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,43393,1689938258412 already deleted, retry=false 2023-07-21 11:17:41,675 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,43393,1689938258412 expired; onlineServers=2 2023-07-21 11:17:41,677 DEBUG [RS:0;jenkins-hbase17:34379] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/oldWALs 2023-07-21 11:17:41,677 INFO [RS:0;jenkins-hbase17:34379] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C34379%2C1689938258169:(num 1689938259293) 2023-07-21 11:17:41,677 DEBUG [RS:0;jenkins-hbase17:34379] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:41,677 INFO [RS:0;jenkins-hbase17:34379] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:41,677 INFO [RS:0;jenkins-hbase17:34379] hbase.ChoreService(369): Chore service for: 
regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 11:17:41,677 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:17:41,678 INFO [RS:0;jenkins-hbase17:34379] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:34379 2023-07-21 11:17:41,680 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:41,680 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:36661-0x1018798e2b60003, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,34379,1689938258169 2023-07-21 11:17:41,680 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,34379,1689938258169 2023-07-21 11:17:41,680 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,34379,1689938258169] 2023-07-21 11:17:41,681 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,34379,1689938258169; numProcessing=2 2023-07-21 11:17:41,681 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,34379,1689938258169 already deleted, retry=false 2023-07-21 11:17:41,681 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,34379,1689938258169 expired; onlineServers=1 2023-07-21 11:17:41,855 DEBUG [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(1504): Waiting on 4ae3a8653d7f5d415a8685b0dcb6cad3 2023-07-21 11:17:41,928 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:41,928 INFO [RS:0;jenkins-hbase17:34379] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,34379,1689938258169; zookeeper connection closed. 
2023-07-21 11:17:41,928 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:34379-0x1018798e2b60001, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:41,929 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@d718567] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@d718567 2023-07-21 11:17:41,931 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/namespace/4ae3a8653d7f5d415a8685b0dcb6cad3/.tmp/info/cd770538177740ab9d8e9b176def6344 2023-07-21 11:17:41,938 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cd770538177740ab9d8e9b176def6344 2023-07-21 11:17:41,939 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/namespace/4ae3a8653d7f5d415a8685b0dcb6cad3/.tmp/info/cd770538177740ab9d8e9b176def6344 as hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/namespace/4ae3a8653d7f5d415a8685b0dcb6cad3/info/cd770538177740ab9d8e9b176def6344 2023-07-21 11:17:41,945 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cd770538177740ab9d8e9b176def6344 2023-07-21 11:17:41,945 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/namespace/4ae3a8653d7f5d415a8685b0dcb6cad3/info/cd770538177740ab9d8e9b176def6344, entries=3, sequenceid=8, filesize=5.0 K 2023-07-21 11:17:41,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 4ae3a8653d7f5d415a8685b0dcb6cad3 in 442ms, sequenceid=8, compaction requested=false 2023-07-21 11:17:41,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-21 11:17:41,957 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/data/hbase/namespace/4ae3a8653d7f5d415a8685b0dcb6cad3/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-21 11:17:41,958 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3. 2023-07-21 11:17:41,958 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 4ae3a8653d7f5d415a8685b0dcb6cad3: 2023-07-21 11:17:41,958 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689938259625.4ae3a8653d7f5d415a8685b0dcb6cad3. 
2023-07-21 11:17:42,028 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:43393-0x1018798e2b60002, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:42,028 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:43393-0x1018798e2b60002, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:42,028 INFO [RS:1;jenkins-hbase17:43393] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,43393,1689938258412; zookeeper connection closed. 2023-07-21 11:17:42,031 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1d07971d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1d07971d 2023-07-21 11:17:42,055 INFO [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,36661,1689938258653; all regions closed. 2023-07-21 11:17:42,055 DEBUG [RS:2;jenkins-hbase17:36661] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-21 11:17:42,062 DEBUG [RS:2;jenkins-hbase17:36661] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/oldWALs 2023-07-21 11:17:42,062 INFO [RS:2;jenkins-hbase17:36661] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C36661%2C1689938258653:(num 1689938259345) 2023-07-21 11:17:42,062 DEBUG [RS:2;jenkins-hbase17:36661] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:42,062 INFO [RS:2;jenkins-hbase17:36661] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:42,062 INFO [RS:2;jenkins-hbase17:36661] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 11:17:42,062 INFO [RS:2;jenkins-hbase17:36661] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:17:42,062 INFO [RS:2;jenkins-hbase17:36661] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:17:42,062 INFO [RS:2;jenkins-hbase17:36661] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 11:17:42,063 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 11:17:42,063 INFO [RS:2;jenkins-hbase17:36661] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:36661 2023-07-21 11:17:42,067 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:42,067 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:36661-0x1018798e2b60003, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36661,1689938258653 2023-07-21 11:17:42,068 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,36661,1689938258653] 2023-07-21 11:17:42,068 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,36661,1689938258653; numProcessing=3 2023-07-21 11:17:42,069 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,36661,1689938258653 already deleted, retry=false 2023-07-21 11:17:42,069 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,36661,1689938258653 expired; onlineServers=0 2023-07-21 11:17:42,069 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,45117,1689938257866' ***** 2023-07-21 11:17:42,069 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 11:17:42,070 DEBUG [M:0;jenkins-hbase17:45117] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7ef656a6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:17:42,070 INFO [M:0;jenkins-hbase17:45117] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:17:42,072 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 11:17:42,072 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:42,072 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:17:42,072 INFO [M:0;jenkins-hbase17:45117] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1e8e9bbe{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 11:17:42,073 INFO [M:0;jenkins-hbase17:45117] server.AbstractConnector(383): Stopped ServerConnector@3cabcf74{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:17:42,073 INFO [M:0;jenkins-hbase17:45117] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:17:42,073 INFO 
[M:0;jenkins-hbase17:45117] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7c403817{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:17:42,073 INFO [M:0;jenkins-hbase17:45117] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@b6871bd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/hadoop.log.dir/,STOPPED} 2023-07-21 11:17:42,074 INFO [M:0;jenkins-hbase17:45117] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,45117,1689938257866 2023-07-21 11:17:42,074 INFO [M:0;jenkins-hbase17:45117] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,45117,1689938257866; all regions closed. 2023-07-21 11:17:42,074 DEBUG [M:0;jenkins-hbase17:45117] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:42,074 INFO [M:0;jenkins-hbase17:45117] master.HMaster(1491): Stopping master jetty server 2023-07-21 11:17:42,074 INFO [M:0;jenkins-hbase17:45117] server.AbstractConnector(383): Stopped ServerConnector@2c33bc64{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:17:42,075 DEBUG [M:0;jenkins-hbase17:45117] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 11:17:42,075 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-21 11:17:42,075 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938259068] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938259068,5,FailOnTimeoutGroup] 2023-07-21 11:17:42,075 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938259068] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938259068,5,FailOnTimeoutGroup] 2023-07-21 11:17:42,075 DEBUG [M:0;jenkins-hbase17:45117] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 11:17:42,077 INFO [M:0;jenkins-hbase17:45117] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 11:17:42,077 INFO [M:0;jenkins-hbase17:45117] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-21 11:17:42,077 INFO [M:0;jenkins-hbase17:45117] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 11:17:42,078 DEBUG [M:0;jenkins-hbase17:45117] master.HMaster(1512): Stopping service threads 2023-07-21 11:17:42,078 INFO [M:0;jenkins-hbase17:45117] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 11:17:42,078 ERROR [M:0;jenkins-hbase17:45117] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-21 11:17:42,079 INFO [M:0;jenkins-hbase17:45117] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 11:17:42,079 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-21 11:17:42,079 DEBUG [M:0;jenkins-hbase17:45117] zookeeper.ZKUtil(398): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 11:17:42,079 WARN [M:0;jenkins-hbase17:45117] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 11:17:42,079 INFO [M:0;jenkins-hbase17:45117] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 11:17:42,080 INFO [M:0;jenkins-hbase17:45117] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 11:17:42,081 DEBUG [M:0;jenkins-hbase17:45117] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 11:17:42,081 INFO [M:0;jenkins-hbase17:45117] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:17:42,081 DEBUG [M:0;jenkins-hbase17:45117] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:17:42,081 DEBUG [M:0;jenkins-hbase17:45117] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 11:17:42,081 DEBUG [M:0;jenkins-hbase17:45117] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:17:42,081 INFO [M:0;jenkins-hbase17:45117] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=93.06 KB heapSize=109.20 KB 2023-07-21 11:17:42,107 INFO [M:0;jenkins-hbase17:45117] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=93.06 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/973f0a658da04bdd9fe84744c2ac6df5 2023-07-21 11:17:42,113 DEBUG [M:0;jenkins-hbase17:45117] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/973f0a658da04bdd9fe84744c2ac6df5 as hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/973f0a658da04bdd9fe84744c2ac6df5 2023-07-21 11:17:42,118 INFO [M:0;jenkins-hbase17:45117] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42461/user/jenkins/test-data/ae6a7e71-44d2-0129-e911-a3cea0d57871/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/973f0a658da04bdd9fe84744c2ac6df5, entries=24, sequenceid=194, filesize=12.4 K 2023-07-21 11:17:42,119 INFO [M:0;jenkins-hbase17:45117] regionserver.HRegion(2948): Finished flush of dataSize ~93.06 KB/95290, heapSize ~109.19 KB/111808, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 38ms, sequenceid=194, compaction requested=false 2023-07-21 11:17:42,121 INFO [M:0;jenkins-hbase17:45117] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 11:17:42,121 DEBUG [M:0;jenkins-hbase17:45117] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 11:17:42,125 INFO [M:0;jenkins-hbase17:45117] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 11:17:42,125 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:17:42,126 INFO [M:0;jenkins-hbase17:45117] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:45117 2023-07-21 11:17:42,127 DEBUG [M:0;jenkins-hbase17:45117] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,45117,1689938257866 already deleted, retry=false 2023-07-21 11:17:42,168 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:36661-0x1018798e2b60003, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:42,168 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): regionserver:36661-0x1018798e2b60003, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:42,168 INFO [RS:2;jenkins-hbase17:36661] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,36661,1689938258653; zookeeper connection closed. 2023-07-21 11:17:42,169 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@54ea13af] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@54ea13af 2023-07-21 11:17:42,169 INFO [Listener at localhost.localdomain/34273] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-21 11:17:42,268 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:42,268 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): master:45117-0x1018798e2b60000, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:42,268 INFO [M:0;jenkins-hbase17:45117] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,45117,1689938257866; zookeeper connection closed. 
2023-07-21 11:17:42,269 WARN [Listener at localhost.localdomain/34273] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 11:17:42,274 INFO [Listener at localhost.localdomain/34273] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 11:17:42,378 WARN [BP-864981222-136.243.18.41-1689938256849 heartbeating to localhost.localdomain/127.0.0.1:42461] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 11:17:42,378 WARN [BP-864981222-136.243.18.41-1689938256849 heartbeating to localhost.localdomain/127.0.0.1:42461] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-864981222-136.243.18.41-1689938256849 (Datanode Uuid f15e7f2e-0dbc-4ab6-8164-94133f83e66b) service to localhost.localdomain/127.0.0.1:42461 2023-07-21 11:17:42,379 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/cluster_b86798cd-2ac0-1178-5510-e4b73708cff7/dfs/data/data5/current/BP-864981222-136.243.18.41-1689938256849] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 11:17:42,417 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/cluster_b86798cd-2ac0-1178-5510-e4b73708cff7/dfs/data/data6/current/BP-864981222-136.243.18.41-1689938256849] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 11:17:42,420 WARN [Listener at localhost.localdomain/34273] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 11:17:42,430 INFO [Listener at localhost.localdomain/34273] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 11:17:42,536 WARN [BP-864981222-136.243.18.41-1689938256849 heartbeating to localhost.localdomain/127.0.0.1:42461] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 11:17:42,536 WARN [BP-864981222-136.243.18.41-1689938256849 heartbeating to localhost.localdomain/127.0.0.1:42461] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-864981222-136.243.18.41-1689938256849 (Datanode Uuid 5335fc3b-aacc-43ff-a74b-10b68b202ff2) service to localhost.localdomain/127.0.0.1:42461 2023-07-21 11:17:42,537 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/cluster_b86798cd-2ac0-1178-5510-e4b73708cff7/dfs/data/data3/current/BP-864981222-136.243.18.41-1689938256849] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 11:17:42,538 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/cluster_b86798cd-2ac0-1178-5510-e4b73708cff7/dfs/data/data4/current/BP-864981222-136.243.18.41-1689938256849] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 11:17:42,541 WARN [Listener at localhost.localdomain/34273] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 11:17:42,560 INFO [Listener at localhost.localdomain/34273] 
log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 11:17:42,666 WARN [BP-864981222-136.243.18.41-1689938256849 heartbeating to localhost.localdomain/127.0.0.1:42461] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 11:17:42,666 WARN [BP-864981222-136.243.18.41-1689938256849 heartbeating to localhost.localdomain/127.0.0.1:42461] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-864981222-136.243.18.41-1689938256849 (Datanode Uuid 2b053258-5df3-41a9-9601-60ca8e00ac80) service to localhost.localdomain/127.0.0.1:42461 2023-07-21 11:17:42,666 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/cluster_b86798cd-2ac0-1178-5510-e4b73708cff7/dfs/data/data1/current/BP-864981222-136.243.18.41-1689938256849] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 11:17:42,667 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/cluster_b86798cd-2ac0-1178-5510-e4b73708cff7/dfs/data/data2/current/BP-864981222-136.243.18.41-1689938256849] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 11:17:42,677 INFO [Listener at localhost.localdomain/34273] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-21 11:17:42,790 INFO [Listener at localhost.localdomain/34273] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-21 11:17:42,822 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-21 11:17:42,822 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-21 11:17:42,822 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/hadoop.log.dir so I do NOT create it in target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0 2023-07-21 11:17:42,822 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7eed45d4-18c0-215c-eb5a-45fa6f275513/hadoop.tmp.dir so I do NOT create it in target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0 2023-07-21 11:17:42,823 INFO [Listener at localhost.localdomain/34273] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5, deleteOnExit=true 2023-07-21 11:17:42,823 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-21 11:17:42,823 INFO [Listener at localhost.localdomain/34273] 
hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/test.cache.data in system properties and HBase conf 2023-07-21 11:17:42,823 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/hadoop.tmp.dir in system properties and HBase conf 2023-07-21 11:17:42,823 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/hadoop.log.dir in system properties and HBase conf 2023-07-21 11:17:42,823 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-21 11:17:42,823 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-21 11:17:42,823 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-21 11:17:42,823 DEBUG [Listener at localhost.localdomain/34273] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-21 11:17:42,823 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-21 11:17:42,824 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-21 11:17:42,824 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-21 11:17:42,824 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 11:17:42,824 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-21 11:17:42,824 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-21 11:17:42,824 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 11:17:42,824 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 11:17:42,824 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-21 11:17:42,824 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/nfs.dump.dir in system properties and HBase conf 2023-07-21 11:17:42,825 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/java.io.tmpdir in system properties and HBase conf 2023-07-21 11:17:42,825 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 11:17:42,825 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-21 11:17:42,825 INFO [Listener at localhost.localdomain/34273] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-21 11:17:42,827 WARN [Listener at localhost.localdomain/34273] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 11:17:42,828 WARN [Listener at localhost.localdomain/34273] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 11:17:42,853 WARN [Listener at localhost.localdomain/34273] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 11:17:42,855 INFO [Listener at localhost.localdomain/34273] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 11:17:42,859 INFO [Listener at localhost.localdomain/34273] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/java.io.tmpdir/Jetty_localhost_localdomain_38773_hdfs____vr8jhk/webapp 2023-07-21 11:17:42,889 DEBUG [Listener at localhost.localdomain/34273-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1018798e2b6000a, quorum=127.0.0.1:62351, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-21 11:17:42,889 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1018798e2b6000a, quorum=127.0.0.1:62351, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-21 11:17:42,944 INFO [Listener at localhost.localdomain/34273] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:38773 2023-07-21 11:17:42,951 WARN [Listener at localhost.localdomain/34273] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 11:17:42,951 WARN [Listener at localhost.localdomain/34273] conf.Configuration(1701): No 
unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 11:17:42,980 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:17:42,980 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 11:17:42,980 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 11:17:42,988 WARN [Listener at localhost.localdomain/39155] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 11:17:43,009 WARN [Listener at localhost.localdomain/39155] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 11:17:43,012 WARN [Listener at localhost.localdomain/39155] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 11:17:43,013 INFO [Listener at localhost.localdomain/39155] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 11:17:43,018 INFO [Listener at localhost.localdomain/39155] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/java.io.tmpdir/Jetty_localhost_38039_datanode____.3han48/webapp 2023-07-21 11:17:43,094 INFO [Listener at localhost.localdomain/39155] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38039 2023-07-21 11:17:43,100 WARN [Listener at localhost.localdomain/43995] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 11:17:43,117 WARN [Listener at localhost.localdomain/43995] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 11:17:43,119 WARN [Listener at localhost.localdomain/43995] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 11:17:43,120 INFO [Listener at localhost.localdomain/43995] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 11:17:43,125 INFO [Listener at localhost.localdomain/43995] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/java.io.tmpdir/Jetty_localhost_44425_datanode____.wx3vs2/webapp 2023-07-21 11:17:43,164 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x923ee7aa60ef158e: Processing first storage report for DS-044b3e42-0a47-489d-9295-f48998774954 from datanode ae7bca27-414c-45ea-8ecc-45f3a7f89c08 2023-07-21 11:17:43,164 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x923ee7aa60ef158e: from storage DS-044b3e42-0a47-489d-9295-f48998774954 node DatanodeRegistration(127.0.0.1:40211, 
datanodeUuid=ae7bca27-414c-45ea-8ecc-45f3a7f89c08, infoPort=46553, infoSecurePort=0, ipcPort=43995, storageInfo=lv=-57;cid=testClusterID;nsid=817142068;c=1689938262829), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 11:17:43,164 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x923ee7aa60ef158e: Processing first storage report for DS-9871e1fc-1e6a-4a97-8950-339641b87674 from datanode ae7bca27-414c-45ea-8ecc-45f3a7f89c08 2023-07-21 11:17:43,164 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x923ee7aa60ef158e: from storage DS-9871e1fc-1e6a-4a97-8950-339641b87674 node DatanodeRegistration(127.0.0.1:40211, datanodeUuid=ae7bca27-414c-45ea-8ecc-45f3a7f89c08, infoPort=46553, infoSecurePort=0, ipcPort=43995, storageInfo=lv=-57;cid=testClusterID;nsid=817142068;c=1689938262829), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 11:17:43,210 INFO [Listener at localhost.localdomain/43995] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44425 2023-07-21 11:17:43,217 WARN [Listener at localhost.localdomain/41059] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 11:17:43,239 WARN [Listener at localhost.localdomain/41059] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 11:17:43,241 WARN [Listener at localhost.localdomain/41059] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 11:17:43,242 INFO [Listener at localhost.localdomain/41059] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 11:17:43,246 INFO [Listener at localhost.localdomain/41059] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/java.io.tmpdir/Jetty_localhost_43745_datanode____xlj340/webapp 2023-07-21 11:17:43,289 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb5dd29c239e20b5d: Processing first storage report for DS-488ee4b5-1933-4b27-8edd-ef92db9d53f5 from datanode 49b97863-398f-442f-bf84-5540d8083894 2023-07-21 11:17:43,289 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb5dd29c239e20b5d: from storage DS-488ee4b5-1933-4b27-8edd-ef92db9d53f5 node DatanodeRegistration(127.0.0.1:45051, datanodeUuid=49b97863-398f-442f-bf84-5540d8083894, infoPort=39939, infoSecurePort=0, ipcPort=41059, storageInfo=lv=-57;cid=testClusterID;nsid=817142068;c=1689938262829), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 11:17:43,289 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb5dd29c239e20b5d: Processing first storage report for DS-822e74e1-281e-4177-af34-e75fbf88b668 from datanode 49b97863-398f-442f-bf84-5540d8083894 2023-07-21 11:17:43,289 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb5dd29c239e20b5d: from storage DS-822e74e1-281e-4177-af34-e75fbf88b668 node DatanodeRegistration(127.0.0.1:45051, datanodeUuid=49b97863-398f-442f-bf84-5540d8083894, infoPort=39939, infoSecurePort=0, ipcPort=41059, 
storageInfo=lv=-57;cid=testClusterID;nsid=817142068;c=1689938262829), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-21 11:17:43,333 INFO [Listener at localhost.localdomain/41059] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43745 2023-07-21 11:17:43,342 WARN [Listener at localhost.localdomain/37917] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 11:17:43,408 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x62473627370210d: Processing first storage report for DS-dadc5c6a-fe7e-4a9e-82a4-3ac7321013bd from datanode a10b7906-dbda-47a3-bcee-c9bf422e15ff 2023-07-21 11:17:43,408 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x62473627370210d: from storage DS-dadc5c6a-fe7e-4a9e-82a4-3ac7321013bd node DatanodeRegistration(127.0.0.1:45343, datanodeUuid=a10b7906-dbda-47a3-bcee-c9bf422e15ff, infoPort=37561, infoSecurePort=0, ipcPort=37917, storageInfo=lv=-57;cid=testClusterID;nsid=817142068;c=1689938262829), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 11:17:43,408 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x62473627370210d: Processing first storage report for DS-fb487cb5-bcbf-4b6c-a4ef-4c7c28e32fd2 from datanode a10b7906-dbda-47a3-bcee-c9bf422e15ff 2023-07-21 11:17:43,408 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x62473627370210d: from storage DS-fb487cb5-bcbf-4b6c-a4ef-4c7c28e32fd2 node DatanodeRegistration(127.0.0.1:45343, datanodeUuid=a10b7906-dbda-47a3-bcee-c9bf422e15ff, infoPort=37561, infoSecurePort=0, ipcPort=37917, storageInfo=lv=-57;cid=testClusterID;nsid=817142068;c=1689938262829), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 11:17:43,453 DEBUG [Listener at localhost.localdomain/37917] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0 2023-07-21 11:17:43,455 INFO [Listener at localhost.localdomain/37917] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/zookeeper_0, clientPort=54201, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-21 11:17:43,456 INFO [Listener at localhost.localdomain/37917] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54201 2023-07-21 11:17:43,456 INFO [Listener at localhost.localdomain/37917] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 
2023-07-21 11:17:43,457 INFO [Listener at localhost.localdomain/37917] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:43,477 INFO [Listener at localhost.localdomain/37917] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a with version=8 2023-07-21 11:17:43,477 INFO [Listener at localhost.localdomain/37917] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:38415/user/jenkins/test-data/73c9c31f-4444-563a-6597-e9b9636fd1e6/hbase-staging 2023-07-21 11:17:43,478 DEBUG [Listener at localhost.localdomain/37917] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 11:17:43,478 DEBUG [Listener at localhost.localdomain/37917] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 11:17:43,478 DEBUG [Listener at localhost.localdomain/37917] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-21 11:17:43,478 DEBUG [Listener at localhost.localdomain/37917] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-21 11:17:43,479 INFO [Listener at localhost.localdomain/37917] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:17:43,479 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:43,479 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:43,480 INFO [Listener at localhost.localdomain/37917] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:17:43,480 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:43,480 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:17:43,480 INFO [Listener at localhost.localdomain/37917] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:17:43,483 INFO [Listener at localhost.localdomain/37917] ipc.NettyRpcServer(120): Bind to /136.243.18.41:37771 2023-07-21 11:17:43,484 INFO [Listener at localhost.localdomain/37917] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:43,485 INFO [Listener at localhost.localdomain/37917] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 
11:17:43,486 INFO [Listener at localhost.localdomain/37917] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37771 connecting to ZooKeeper ensemble=127.0.0.1:54201 2023-07-21 11:17:43,494 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:377710x0, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:17:43,495 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37771-0x1018798f8bd0000 connected 2023-07-21 11:17:43,511 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ZKUtil(164): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:17:43,511 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ZKUtil(164): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:43,512 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ZKUtil(164): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:17:43,516 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37771 2023-07-21 11:17:43,516 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37771 2023-07-21 11:17:43,517 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37771 2023-07-21 11:17:43,519 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37771 2023-07-21 11:17:43,519 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37771 2023-07-21 11:17:43,521 INFO [Listener at localhost.localdomain/37917] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:17:43,521 INFO [Listener at localhost.localdomain/37917] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:17:43,521 INFO [Listener at localhost.localdomain/37917] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:17:43,521 INFO [Listener at localhost.localdomain/37917] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 11:17:43,521 INFO [Listener at localhost.localdomain/37917] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:17:43,521 INFO [Listener at localhost.localdomain/37917] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:17:43,521 INFO [Listener at localhost.localdomain/37917] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home 
system property not specified. Disabling /prof endpoint. 2023-07-21 11:17:43,522 INFO [Listener at localhost.localdomain/37917] http.HttpServer(1146): Jetty bound to port 44673 2023-07-21 11:17:43,522 INFO [Listener at localhost.localdomain/37917] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:17:43,523 INFO [Listener at localhost.localdomain/37917] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:43,523 INFO [Listener at localhost.localdomain/37917] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@dbcf554{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:17:43,524 INFO [Listener at localhost.localdomain/37917] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:43,524 INFO [Listener at localhost.localdomain/37917] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@10ebf347{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:17:43,617 INFO [Listener at localhost.localdomain/37917] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:17:43,619 INFO [Listener at localhost.localdomain/37917] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:17:43,619 INFO [Listener at localhost.localdomain/37917] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:17:43,619 INFO [Listener at localhost.localdomain/37917] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:17:43,620 INFO [Listener at localhost.localdomain/37917] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:43,621 INFO [Listener at localhost.localdomain/37917] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@529338f3{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/java.io.tmpdir/jetty-0_0_0_0-44673-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5196115297828969448/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 11:17:43,622 INFO [Listener at localhost.localdomain/37917] server.AbstractConnector(333): Started ServerConnector@5c5ae6a{HTTP/1.1, (http/1.1)}{0.0.0.0:44673} 2023-07-21 11:17:43,623 INFO [Listener at localhost.localdomain/37917] server.Server(415): Started @46654ms 2023-07-21 11:17:43,623 INFO [Listener at localhost.localdomain/37917] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a, hbase.cluster.distributed=false 2023-07-21 11:17:43,634 INFO [Listener at localhost.localdomain/37917] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:17:43,634 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-21 11:17:43,634 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:43,635 INFO [Listener at localhost.localdomain/37917] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:17:43,635 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:43,635 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:17:43,635 INFO [Listener at localhost.localdomain/37917] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:17:43,637 INFO [Listener at localhost.localdomain/37917] ipc.NettyRpcServer(120): Bind to /136.243.18.41:37509 2023-07-21 11:17:43,637 INFO [Listener at localhost.localdomain/37917] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:17:43,638 DEBUG [Listener at localhost.localdomain/37917] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:17:43,638 INFO [Listener at localhost.localdomain/37917] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:43,639 INFO [Listener at localhost.localdomain/37917] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:43,640 INFO [Listener at localhost.localdomain/37917] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37509 connecting to ZooKeeper ensemble=127.0.0.1:54201 2023-07-21 11:17:43,643 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:375090x0, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:17:43,644 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ZKUtil(164): regionserver:375090x0, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:17:43,644 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37509-0x1018798f8bd0001 connected 2023-07-21 11:17:43,645 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ZKUtil(164): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:43,645 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ZKUtil(164): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:17:43,647 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37509 2023-07-21 
11:17:43,647 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37509 2023-07-21 11:17:43,647 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37509 2023-07-21 11:17:43,648 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37509 2023-07-21 11:17:43,648 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37509 2023-07-21 11:17:43,649 INFO [Listener at localhost.localdomain/37917] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:17:43,650 INFO [Listener at localhost.localdomain/37917] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:17:43,650 INFO [Listener at localhost.localdomain/37917] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:17:43,650 INFO [Listener at localhost.localdomain/37917] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:17:43,650 INFO [Listener at localhost.localdomain/37917] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:17:43,650 INFO [Listener at localhost.localdomain/37917] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:17:43,650 INFO [Listener at localhost.localdomain/37917] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:17:43,651 INFO [Listener at localhost.localdomain/37917] http.HttpServer(1146): Jetty bound to port 41487 2023-07-21 11:17:43,651 INFO [Listener at localhost.localdomain/37917] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:17:43,654 INFO [Listener at localhost.localdomain/37917] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:43,654 INFO [Listener at localhost.localdomain/37917] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@299bc8ee{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:17:43,654 INFO [Listener at localhost.localdomain/37917] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:43,654 INFO [Listener at localhost.localdomain/37917] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@e9037e3{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:17:43,746 INFO [Listener at localhost.localdomain/37917] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:17:43,746 INFO [Listener at localhost.localdomain/37917] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:17:43,747 INFO [Listener at localhost.localdomain/37917] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:17:43,747 INFO [Listener at localhost.localdomain/37917] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:17:43,747 INFO [Listener at localhost.localdomain/37917] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:43,748 INFO [Listener at localhost.localdomain/37917] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@27c6a71{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/java.io.tmpdir/jetty-0_0_0_0-41487-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5086745764044953269/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:43,749 INFO [Listener at localhost.localdomain/37917] server.AbstractConnector(333): Started ServerConnector@50c059fc{HTTP/1.1, (http/1.1)}{0.0.0.0:41487} 2023-07-21 11:17:43,750 INFO [Listener at localhost.localdomain/37917] server.Server(415): Started @46781ms 2023-07-21 11:17:43,759 INFO [Listener at localhost.localdomain/37917] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:17:43,759 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:43,759 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
11:17:43,759 INFO [Listener at localhost.localdomain/37917] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:17:43,759 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:43,759 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:17:43,759 INFO [Listener at localhost.localdomain/37917] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:17:43,761 INFO [Listener at localhost.localdomain/37917] ipc.NettyRpcServer(120): Bind to /136.243.18.41:39253 2023-07-21 11:17:43,762 INFO [Listener at localhost.localdomain/37917] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:17:43,763 DEBUG [Listener at localhost.localdomain/37917] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:17:43,763 INFO [Listener at localhost.localdomain/37917] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:43,764 INFO [Listener at localhost.localdomain/37917] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:43,765 INFO [Listener at localhost.localdomain/37917] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39253 connecting to ZooKeeper ensemble=127.0.0.1:54201 2023-07-21 11:17:43,768 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:392530x0, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:17:43,769 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ZKUtil(164): regionserver:392530x0, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:17:43,770 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39253-0x1018798f8bd0002 connected 2023-07-21 11:17:43,771 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ZKUtil(164): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:43,772 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ZKUtil(164): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:17:43,772 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39253 2023-07-21 11:17:43,772 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39253 2023-07-21 11:17:43,773 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39253 2023-07-21 11:17:43,774 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39253 2023-07-21 11:17:43,774 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39253 2023-07-21 11:17:43,777 INFO [Listener at localhost.localdomain/37917] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:17:43,777 INFO [Listener at localhost.localdomain/37917] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:17:43,777 INFO [Listener at localhost.localdomain/37917] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:17:43,777 INFO [Listener at localhost.localdomain/37917] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:17:43,777 INFO [Listener at localhost.localdomain/37917] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:17:43,777 INFO [Listener at localhost.localdomain/37917] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:17:43,778 INFO [Listener at localhost.localdomain/37917] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
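Alongside the RPC executors, each region server above allocates an on-heap BlockCache (782.40 MB here) and a MOB file cache (cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5). A hedged sketch of the usual sizing knobs; the mob-cache key names are given from memory and should be checked against the hbase-default.xml of the release in use.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CacheConfigSketch {
    public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // Fraction of the heap handed to the block cache; drives the "Allocating BlockCache size=..." line.
        conf.setFloat("hfile.block.cache.size", 0.4f);
        // MOB file cache tuning; key names assumed, values mirror the defaults printed in the log.
        conf.setInt("hbase.mob.file.cache.size", 1000);
        conf.setLong("hbase.mob.cache.evict.period", 3600L);          // seconds
        conf.setFloat("hbase.mob.cache.evict.remain.ratio", 0.5f);
        return conf;
    }
}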
2023-07-21 11:17:43,778 INFO [Listener at localhost.localdomain/37917] http.HttpServer(1146): Jetty bound to port 34115 2023-07-21 11:17:43,778 INFO [Listener at localhost.localdomain/37917] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:17:43,792 INFO [Listener at localhost.localdomain/37917] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:43,792 INFO [Listener at localhost.localdomain/37917] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6e08737b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:17:43,792 INFO [Listener at localhost.localdomain/37917] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:43,793 INFO [Listener at localhost.localdomain/37917] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@dce9de5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:17:43,887 INFO [Listener at localhost.localdomain/37917] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:17:43,888 INFO [Listener at localhost.localdomain/37917] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:17:43,888 INFO [Listener at localhost.localdomain/37917] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:17:43,888 INFO [Listener at localhost.localdomain/37917] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 11:17:43,889 INFO [Listener at localhost.localdomain/37917] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:43,890 INFO [Listener at localhost.localdomain/37917] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@29b96fd7{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/java.io.tmpdir/jetty-0_0_0_0-34115-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5317927063534927805/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:43,892 INFO [Listener at localhost.localdomain/37917] server.AbstractConnector(333): Started ServerConnector@14467a90{HTTP/1.1, (http/1.1)}{0.0.0.0:34115} 2023-07-21 11:17:43,893 INFO [Listener at localhost.localdomain/37917] server.Server(415): Started @46925ms 2023-07-21 11:17:43,903 INFO [Listener at localhost.localdomain/37917] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:17:43,903 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:43,903 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
11:17:43,903 INFO [Listener at localhost.localdomain/37917] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:17:43,903 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:43,903 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:17:43,903 INFO [Listener at localhost.localdomain/37917] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:17:43,905 INFO [Listener at localhost.localdomain/37917] ipc.NettyRpcServer(120): Bind to /136.243.18.41:36969 2023-07-21 11:17:43,905 INFO [Listener at localhost.localdomain/37917] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:17:43,906 DEBUG [Listener at localhost.localdomain/37917] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:17:43,907 INFO [Listener at localhost.localdomain/37917] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:43,907 INFO [Listener at localhost.localdomain/37917] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:43,908 INFO [Listener at localhost.localdomain/37917] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36969 connecting to ZooKeeper ensemble=127.0.0.1:54201 2023-07-21 11:17:43,911 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:369690x0, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:17:43,912 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ZKUtil(164): regionserver:369690x0, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:17:43,912 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36969-0x1018798f8bd0003 connected 2023-07-21 11:17:43,912 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ZKUtil(164): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:43,913 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ZKUtil(164): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:17:43,913 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36969 2023-07-21 11:17:43,914 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36969 2023-07-21 11:17:43,916 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36969 2023-07-21 11:17:43,916 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36969 2023-07-21 11:17:43,916 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36969 2023-07-21 11:17:43,918 INFO [Listener at localhost.localdomain/37917] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:17:43,918 INFO [Listener at localhost.localdomain/37917] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:17:43,918 INFO [Listener at localhost.localdomain/37917] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:17:43,919 INFO [Listener at localhost.localdomain/37917] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:17:43,919 INFO [Listener at localhost.localdomain/37917] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:17:43,919 INFO [Listener at localhost.localdomain/37917] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:17:43,919 INFO [Listener at localhost.localdomain/37917] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
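Before registering with the master, each region server above connects to the test ZooKeeper ensemble and sets watchers on znodes that do not exist yet (/hbase/master, /hbase/running, /hbase/acl). A minimal sketch of that exists-watch pattern using the plain Apache ZooKeeper client; the quorum address and timeout are placeholders, and real HBase code goes through ZKWatcher/ZKUtil rather than the raw client.

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkWatchSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Placeholder quorum/timeout; the mini-cluster uses 127.0.0.1:<random port> and a 90s session.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 90_000, (WatchedEvent e) -> {
            if (e.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
        // exists() with watch=true registers a watcher even if the znode is absent,
        // which is the "Set watcher on znode that does not yet exist" pattern in the log.
        zk.exists("/hbase/master", true);
        zk.exists("/hbase/running", true);
        zk.close();
    }
}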
2023-07-21 11:17:43,919 INFO [Listener at localhost.localdomain/37917] http.HttpServer(1146): Jetty bound to port 35869 2023-07-21 11:17:43,919 INFO [Listener at localhost.localdomain/37917] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:17:43,923 INFO [Listener at localhost.localdomain/37917] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:43,923 INFO [Listener at localhost.localdomain/37917] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@24f42a5b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:17:43,923 INFO [Listener at localhost.localdomain/37917] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:43,924 INFO [Listener at localhost.localdomain/37917] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@c847ff7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:17:44,016 INFO [Listener at localhost.localdomain/37917] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:17:44,016 INFO [Listener at localhost.localdomain/37917] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:17:44,017 INFO [Listener at localhost.localdomain/37917] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:17:44,017 INFO [Listener at localhost.localdomain/37917] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:17:44,018 INFO [Listener at localhost.localdomain/37917] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:44,018 INFO [Listener at localhost.localdomain/37917] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@745678b7{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/java.io.tmpdir/jetty-0_0_0_0-35869-hbase-server-2_4_18-SNAPSHOT_jar-_-any-131832875109021440/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:44,019 INFO [Listener at localhost.localdomain/37917] server.AbstractConnector(333): Started ServerConnector@589eb021{HTTP/1.1, (http/1.1)}{0.0.0.0:35869} 2023-07-21 11:17:44,020 INFO [Listener at localhost.localdomain/37917] server.Server(415): Started @47052ms 2023-07-21 11:17:44,021 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:17:44,024 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@47fec777{HTTP/1.1, (http/1.1)}{0.0.0.0:46223} 2023-07-21 11:17:44,024 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(415): Started @47056ms 2023-07-21 11:17:44,024 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode 
/hbase/backup-masters/jenkins-hbase17.apache.org,37771,1689938263479 2023-07-21 11:17:44,025 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 11:17:44,025 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,37771,1689938263479 2023-07-21 11:17:44,026 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:17:44,026 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:17:44,026 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:17:44,026 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:17:44,026 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:44,028 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 11:17:44,029 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 11:17:44,029 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,37771,1689938263479 from backup master directory 2023-07-21 11:17:44,029 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,37771,1689938263479 2023-07-21 11:17:44,029 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 11:17:44,029 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
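The active-master handoff above follows the usual ZooKeeper leader-election shape: the master registers an ephemeral node under /hbase/backup-masters, creates /hbase/master, then deletes its backup entry. A hedged, simplified sketch of that pattern with the raw ZooKeeper client; the paths mirror the log, but the data payload (a protobuf in real HBase), ACLs, and retry handling are elided.

import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class MasterElectionSketch {
    /** Returns true if this process won the /hbase/master znode. */
    static boolean tryBecomeActive(ZooKeeper zk, String serverName) throws Exception {
        byte[] data = serverName.getBytes(StandardCharsets.UTF_8);
        String backupZNode = "/hbase/backup-masters/" + serverName;
        // Ephemeral backup registration, like /hbase/backup-masters/<host,port,startcode>.
        zk.create(backupZNode, data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        try {
            // Whoever creates /hbase/master first becomes the active master.
            zk.create("/hbase/master", data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
            // The winner removes itself from the backup directory, as in the log.
            zk.delete(backupZNode, -1);
            return true;
        } catch (KeeperException.NodeExistsException alreadyTaken) {
            return false; // stay registered as a backup master
        }
    }
}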
2023-07-21 11:17:44,029 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,37771,1689938263479 2023-07-21 11:17:44,044 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/hbase.id with ID: e433026e-7f02-4fd4-a13c-c96e9cd9b4e4 2023-07-21 11:17:44,054 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:44,056 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:44,070 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x6bda7a20 to 127.0.0.1:54201 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:17:44,074 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@43acc2e8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:44,074 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:44,075 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 11:17:44,075 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:17:44,077 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/MasterData/data/master/store-tmp 2023-07-21 11:17:44,089 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:44,089 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 11:17:44,089 INFO 
[master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:17:44,089 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:17:44,089 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 11:17:44,089 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:17:44,089 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:17:44,089 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 11:17:44,090 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/MasterData/WALs/jenkins-hbase17.apache.org,37771,1689938263479 2023-07-21 11:17:44,092 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C37771%2C1689938263479, suffix=, logDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/MasterData/WALs/jenkins-hbase17.apache.org,37771,1689938263479, archiveDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/MasterData/oldWALs, maxLogs=10 2023-07-21 11:17:44,106 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40211,DS-044b3e42-0a47-489d-9295-f48998774954,DISK] 2023-07-21 11:17:44,106 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45051,DS-488ee4b5-1933-4b27-8edd-ef92db9d53f5,DISK] 2023-07-21 11:17:44,106 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45343,DS-dadc5c6a-fe7e-4a9e-82a4-3ac7321013bd,DISK] 2023-07-21 11:17:44,114 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/MasterData/WALs/jenkins-hbase17.apache.org,37771,1689938263479/jenkins-hbase17.apache.org%2C37771%2C1689938263479.1689938264092 2023-07-21 11:17:44,115 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40211,DS-044b3e42-0a47-489d-9295-f48998774954,DISK], DatanodeInfoWithStorage[127.0.0.1:45343,DS-dadc5c6a-fe7e-4a9e-82a4-3ac7321013bd,DISK], DatanodeInfoWithStorage[127.0.0.1:45051,DS-488ee4b5-1933-4b27-8edd-ef92db9d53f5,DISK]] 2023-07-21 11:17:44,115 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 
1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:44,115 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:44,115 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:17:44,115 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:17:44,117 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:17:44,119 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 11:17:44,119 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 11:17:44,119 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:44,120 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:17:44,120 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:17:44,123 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:17:44,124 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, 
maxSeqId=-1 2023-07-21 11:17:44,125 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9639046080, jitterRate=-0.10229387879371643}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:44,125 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 11:17:44,125 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 11:17:44,126 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 11:17:44,126 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 11:17:44,126 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 11:17:44,127 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-21 11:17:44,127 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-21 11:17:44,127 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 11:17:44,128 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 11:17:44,129 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
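The master above bootstraps its local 'master:store' region with a single 'proc' family (VERSIONS=1, BLOOMFILTER=ROW, BLOCKSIZE=65536) and then starts the procedure executor on top of it. A hedged sketch of how an equivalent descriptor would be expressed with the HBase 2.x client builders; it mirrors the attributes printed in the log rather than the internal MasterRegion code path.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
    static TableDescriptor build() {
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("proc"))
            .setMaxVersions(1)              // VERSIONS => '1'
            .setBloomFilterType(BloomType.ROW)
            .setBlocksize(65536)            // BLOCKSIZE => '65536'
            .setInMemory(false)
            .build();
        // "master:store" = table 'store' in the 'master' namespace.
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("master", "store"))
            .setColumnFamily(proc)
            .build();
    }
}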
2023-07-21 11:17:44,130 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 11:17:44,130 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 11:17:44,130 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 11:17:44,132 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:44,132 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 11:17:44,132 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 11:17:44,133 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 11:17:44,133 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:44,133 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:44,133 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:44,133 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:44,134 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:44,137 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,37771,1689938263479, sessionid=0x1018798f8bd0000, setting cluster-up flag (Was=false) 2023-07-21 11:17:44,138 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:44,141 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 11:17:44,141 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,37771,1689938263479 2023-07-21 11:17:44,144 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:44,146 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 11:17:44,146 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,37771,1689938263479 2023-07-21 11:17:44,147 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.hbase-snapshot/.tmp 2023-07-21 11:17:44,148 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 11:17:44,148 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 11:17:44,149 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,37771,1689938263479] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 11:17:44,149 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 11:17:44,150 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-21 11:17:44,150 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 11:17:44,160 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 11:17:44,160 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
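The entries above show the master loading RSGroupAdminEndpoint as a system coprocessor and refreshing the RSGroup info manager in offline mode. Outside this test harness, rs-group support on HBase 2.x is normally switched on through two configuration keys; a hedged sketch follows (the class names are the commonly documented ones, but verify them against the docs for the release in use).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupEnableSketch {
    public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // Documented switches for rsgroup support on HBase 2.x (built in on 3.0+).
        conf.set("hbase.coprocessor.master.classes",
                 "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        conf.set("hbase.master.loadbalancer.class",
                 "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
        return conf;
    }
}

The RSGroupBasedLoadBalancer delegates to an internal balancer, which is consistent with the StochasticLoadBalancer config being logged twice above.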
2023-07-21 11:17:44,160 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 11:17:44,160 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 11:17:44,161 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:17:44,161 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:17:44,161 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:17:44,161 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:17:44,161 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-07-21 11:17:44,161 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,161 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:17:44,161 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,162 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689938294162 2023-07-21 11:17:44,163 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 11:17:44,164 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 11:17:44,164 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 11:17:44,164 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 11:17:44,164 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 11:17:44,164 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 11:17:44,164 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,165 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 11:17:44,165 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 11:17:44,165 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 11:17:44,165 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 11:17:44,165 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 11:17:44,165 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 11:17:44,166 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 11:17:44,166 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938264166,5,FailOnTimeoutGroup] 2023-07-21 11:17:44,166 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938264166,5,FailOnTimeoutGroup] 2023-07-21 11:17:44,166 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,166 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 11:17:44,166 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,166 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
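The lines above start the master's cleaner chores (log cleaner, HFile cleaner, snapshot cleaner) with their plugin chains. A hedged sketch of the keys that typically control those chains and the TTL-based retention; the key names are the commonly documented ones, the plugin class names are copied from the log, and the values are illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CleanerChoreConfigSketch {
    public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // Plugin chains behind the "Initialize cleaner=..." lines; class lists are comma-separated.
        conf.set("hbase.master.logcleaner.plugins",
                 "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,"
               + "org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner");
        conf.set("hbase.master.hfilecleaner.plugins",
                 "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner,"
               + "org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner");
        // How long old WALs are retained before the TTL cleaner lets them go (ms).
        conf.setLong("hbase.master.logcleaner.ttl", 600_000L);
        return conf;
    }
}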
2023-07-21 11:17:44,166 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:44,176 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:44,177 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:44,177 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a 2023-07-21 11:17:44,189 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:44,190 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 11:17:44,191 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/info 2023-07-21 11:17:44,192 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 11:17:44,192 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:44,192 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 11:17:44,194 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:17:44,194 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 11:17:44,195 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:44,195 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 11:17:44,196 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/table 2023-07-21 11:17:44,197 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle 
point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 11:17:44,197 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:44,198 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740 2023-07-21 11:17:44,198 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740 2023-07-21 11:17:44,200 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 11:17:44,201 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 11:17:44,203 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:44,204 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10109213280, jitterRate=-0.05850614607334137}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 11:17:44,204 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 11:17:44,204 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 11:17:44,204 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 11:17:44,204 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 11:17:44,204 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 11:17:44,204 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 11:17:44,204 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 11:17:44,204 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 11:17:44,205 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 11:17:44,205 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 11:17:44,205 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 11:17:44,206 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 11:17:44,207 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 11:17:44,222 INFO [RS:1;jenkins-hbase17:39253] regionserver.HRegionServer(951): ClusterId : e433026e-7f02-4fd4-a13c-c96e9cd9b4e4 2023-07-21 11:17:44,222 INFO [RS:2;jenkins-hbase17:36969] regionserver.HRegionServer(951): ClusterId : e433026e-7f02-4fd4-a13c-c96e9cd9b4e4 2023-07-21 11:17:44,222 INFO [RS:0;jenkins-hbase17:37509] regionserver.HRegionServer(951): ClusterId : e433026e-7f02-4fd4-a13c-c96e9cd9b4e4 2023-07-21 11:17:44,224 DEBUG [RS:2;jenkins-hbase17:36969] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:17:44,224 DEBUG [RS:0;jenkins-hbase17:37509] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:17:44,223 DEBUG [RS:1;jenkins-hbase17:39253] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:17:44,226 DEBUG [RS:0;jenkins-hbase17:37509] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:17:44,226 DEBUG [RS:2;jenkins-hbase17:36969] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:17:44,226 DEBUG [RS:2;jenkins-hbase17:36969] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:17:44,226 DEBUG [RS:1;jenkins-hbase17:39253] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:17:44,226 DEBUG [RS:1;jenkins-hbase17:39253] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:17:44,226 DEBUG [RS:0;jenkins-hbase17:37509] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:17:44,227 DEBUG [RS:2;jenkins-hbase17:36969] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:17:44,228 DEBUG [RS:1;jenkins-hbase17:39253] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:17:44,228 DEBUG [RS:2;jenkins-hbase17:36969] zookeeper.ReadOnlyZKClient(139): Connect 0x0ad294e3 to 127.0.0.1:54201 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:17:44,228 DEBUG [RS:0;jenkins-hbase17:37509] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:17:44,231 DEBUG [RS:1;jenkins-hbase17:39253] zookeeper.ReadOnlyZKClient(139): Connect 0x32171ab7 to 127.0.0.1:54201 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:17:44,231 DEBUG [RS:0;jenkins-hbase17:37509] zookeeper.ReadOnlyZKClient(139): Connect 0x42f1f520 to 127.0.0.1:54201 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:17:44,239 DEBUG [RS:2;jenkins-hbase17:36969] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4156bc9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:44,239 DEBUG [RS:0;jenkins-hbase17:37509] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@17a9b0e4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:44,239 DEBUG [RS:2;jenkins-hbase17:36969] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2aff93, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:17:44,239 DEBUG [RS:1;jenkins-hbase17:39253] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@20e2025d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:44,239 DEBUG [RS:0;jenkins-hbase17:37509] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7379b12d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:17:44,239 DEBUG [RS:1;jenkins-hbase17:39253] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3efc8bd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:17:44,246 DEBUG [RS:2;jenkins-hbase17:36969] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase17:36969 2023-07-21 11:17:44,246 INFO [RS:2;jenkins-hbase17:36969] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:17:44,246 INFO [RS:2;jenkins-hbase17:36969] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:17:44,246 DEBUG [RS:2;jenkins-hbase17:36969] regionserver.HRegionServer(1022): About to register with Master. 
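The ReadOnlyZKClient and AbstractRpcClient lines above reflect ordinary client-side settings (KeyValueCodec as the RPC cell codec, a 90000 ms ZooKeeper session timeout). A minimal configuration sketch, assuming the standard property names rather than quoting the test's own setup:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ClientRpcSettingsSketch {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // RPC cell codec reported by AbstractRpcClient above.
    conf.set("hbase.client.rpc.codec", "org.apache.hadoop.hbase.codec.KeyValueCodec");
    // ZooKeeper session timeout used by the ReadOnlyZKClient connections (90000 ms).
    conf.set("zookeeper.session.timeout", "90000");
    return conf;
  }
}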
2023-07-21 11:17:44,247 INFO [RS:2;jenkins-hbase17:36969] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,37771,1689938263479 with isa=jenkins-hbase17.apache.org/136.243.18.41:36969, startcode=1689938263902 2023-07-21 11:17:44,247 DEBUG [RS:2;jenkins-hbase17:36969] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:17:44,247 DEBUG [RS:1;jenkins-hbase17:39253] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase17:39253 2023-07-21 11:17:44,247 INFO [RS:1;jenkins-hbase17:39253] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:17:44,247 INFO [RS:1;jenkins-hbase17:39253] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:17:44,247 DEBUG [RS:1;jenkins-hbase17:39253] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 11:17:44,247 INFO [RS:1;jenkins-hbase17:39253] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,37771,1689938263479 with isa=jenkins-hbase17.apache.org/136.243.18.41:39253, startcode=1689938263758 2023-07-21 11:17:44,247 DEBUG [RS:1;jenkins-hbase17:39253] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:17:44,249 DEBUG [RS:0;jenkins-hbase17:37509] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:37509 2023-07-21 11:17:44,249 INFO [RS:0;jenkins-hbase17:37509] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:17:44,249 INFO [RS:0;jenkins-hbase17:37509] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:17:44,249 DEBUG [RS:0;jenkins-hbase17:37509] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 11:17:44,252 INFO [RS:0;jenkins-hbase17:37509] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,37771,1689938263479 with isa=jenkins-hbase17.apache.org/136.243.18.41:37509, startcode=1689938263634 2023-07-21 11:17:44,252 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:43263, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:17:44,252 DEBUG [RS:0;jenkins-hbase17:37509] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:17:44,252 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:36001, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:17:44,255 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37771] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:44,255 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,37771,1689938263479] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 11:17:44,256 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:52265, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:17:44,256 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,37771,1689938263479] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 11:17:44,256 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37771] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,39253,1689938263758 2023-07-21 11:17:44,256 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,37771,1689938263479] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 11:17:44,256 DEBUG [RS:2;jenkins-hbase17:36969] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a 2023-07-21 11:17:44,257 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37771] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,37509,1689938263634 2023-07-21 11:17:44,257 DEBUG [RS:2;jenkins-hbase17:36969] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:39155 2023-07-21 11:17:44,256 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,37771,1689938263479] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 11:17:44,257 DEBUG [RS:2;jenkins-hbase17:36969] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44673 2023-07-21 11:17:44,257 DEBUG [RS:1;jenkins-hbase17:39253] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a 2023-07-21 11:17:44,257 DEBUG [RS:0;jenkins-hbase17:37509] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a 2023-07-21 11:17:44,257 DEBUG [RS:1;jenkins-hbase17:39253] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:39155 2023-07-21 11:17:44,257 DEBUG [RS:0;jenkins-hbase17:37509] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:39155 2023-07-21 11:17:44,257 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,37771,1689938263479] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
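As each region server reports for duty, the ServerEventsListenerThread above folds it into the default RSGroup. A hypothetical client-side check of that group using the hbase-rsgroup module this test exercises; treat the exact class and method names as assumptions, not a verified API listing:

import java.io.IOException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class DefaultGroupSketch {
  // Prints the servers currently in the "default" group; after all three region
  // servers register, the set should match the "Updated with servers: N" lines.
  public static void printDefaultServers(Connection connection) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);
    RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
    System.out.println("default group servers: " + defaultGroup.getServers());
  }
}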
2023-07-21 11:17:44,257 DEBUG [RS:0;jenkins-hbase17:37509] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44673 2023-07-21 11:17:44,257 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,37771,1689938263479] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 11:17:44,257 DEBUG [RS:1;jenkins-hbase17:39253] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44673 2023-07-21 11:17:44,258 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:44,260 DEBUG [RS:2;jenkins-hbase17:36969] zookeeper.ZKUtil(162): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:44,260 WARN [RS:2;jenkins-hbase17:36969] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 11:17:44,260 INFO [RS:2;jenkins-hbase17:36969] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:17:44,260 DEBUG [RS:0;jenkins-hbase17:37509] zookeeper.ZKUtil(162): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37509,1689938263634 2023-07-21 11:17:44,260 DEBUG [RS:2;jenkins-hbase17:36969] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/WALs/jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:44,260 WARN [RS:0;jenkins-hbase17:37509] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 11:17:44,260 INFO [RS:0;jenkins-hbase17:37509] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:17:44,260 DEBUG [RS:0;jenkins-hbase17:37509] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/WALs/jenkins-hbase17.apache.org,37509,1689938263634 2023-07-21 11:17:44,261 DEBUG [RS:1;jenkins-hbase17:39253] zookeeper.ZKUtil(162): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689938263758 2023-07-21 11:17:44,261 WARN [RS:1;jenkins-hbase17:39253] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
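Each region server above instantiates a WALProvider of type AsyncFSWALProvider. A sketch of how that provider is selected, assuming the usual configuration keys; "asyncfs" is the provider id that maps to AsyncFSWALProvider in HBase 2.x:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderSketch {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.wal.provider", "asyncfs");       // WALs for user regions
    conf.set("hbase.wal.meta_provider", "asyncfs");  // hbase:meta WAL (assumed key name)
    return conf;
  }
}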
2023-07-21 11:17:44,261 INFO [RS:1;jenkins-hbase17:39253] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:17:44,261 DEBUG [RS:1;jenkins-hbase17:39253] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/WALs/jenkins-hbase17.apache.org,39253,1689938263758 2023-07-21 11:17:44,263 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,37509,1689938263634] 2023-07-21 11:17:44,263 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,36969,1689938263902] 2023-07-21 11:17:44,263 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,39253,1689938263758] 2023-07-21 11:17:44,271 DEBUG [RS:2;jenkins-hbase17:36969] zookeeper.ZKUtil(162): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689938263758 2023-07-21 11:17:44,271 DEBUG [RS:2;jenkins-hbase17:36969] zookeeper.ZKUtil(162): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37509,1689938263634 2023-07-21 11:17:44,272 DEBUG [RS:0;jenkins-hbase17:37509] zookeeper.ZKUtil(162): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689938263758 2023-07-21 11:17:44,272 DEBUG [RS:2;jenkins-hbase17:36969] zookeeper.ZKUtil(162): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:44,272 DEBUG [RS:0;jenkins-hbase17:37509] zookeeper.ZKUtil(162): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37509,1689938263634 2023-07-21 11:17:44,272 DEBUG [RS:1;jenkins-hbase17:39253] zookeeper.ZKUtil(162): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689938263758 2023-07-21 11:17:44,273 DEBUG [RS:0;jenkins-hbase17:37509] zookeeper.ZKUtil(162): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:44,273 DEBUG [RS:1;jenkins-hbase17:39253] zookeeper.ZKUtil(162): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37509,1689938263634 2023-07-21 11:17:44,276 DEBUG [RS:2;jenkins-hbase17:36969] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:17:44,277 INFO [RS:2;jenkins-hbase17:36969] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:17:44,277 DEBUG [RS:0;jenkins-hbase17:37509] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:17:44,278 INFO [RS:0;jenkins-hbase17:37509] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:17:44,278 
DEBUG [RS:1;jenkins-hbase17:39253] zookeeper.ZKUtil(162): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:44,278 INFO [RS:2;jenkins-hbase17:36969] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:17:44,279 DEBUG [RS:1;jenkins-hbase17:39253] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:17:44,279 INFO [RS:2;jenkins-hbase17:36969] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:17:44,279 INFO [RS:2;jenkins-hbase17:36969] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,279 INFO [RS:1;jenkins-hbase17:39253] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:17:44,280 INFO [RS:0;jenkins-hbase17:37509] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:17:44,280 INFO [RS:2;jenkins-hbase17:36969] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:17:44,284 INFO [RS:0;jenkins-hbase17:37509] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:17:44,284 INFO [RS:0;jenkins-hbase17:37509] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,285 INFO [RS:1;jenkins-hbase17:39253] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:17:44,286 INFO [RS:0;jenkins-hbase17:37509] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:17:44,287 INFO [RS:1;jenkins-hbase17:39253] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:17:44,287 INFO [RS:1;jenkins-hbase17:39253] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,287 INFO [RS:2;jenkins-hbase17:36969] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,288 INFO [RS:1;jenkins-hbase17:39253] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:17:44,288 INFO [RS:0;jenkins-hbase17:37509] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
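The MemStoreFlusher and PressureAwareCompactionThroughputController figures above (782.4 M global memstore limit with a 743.3 M low-water mark, 50-100 MB/second compaction throughput) come from standard tuning knobs. A sketch using the property names as I recall them; verify against the hbase-default.xml of the release in use:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class FlushAndCompactionSketch {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // Fraction of heap shared by all memstores; the low-water mark defaults to
    // 95% of it, which matches 743.3 M vs 782.4 M in the log.
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    // PressureAwareCompactionThroughputController bounds, in bytes per second.
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
    return conf;
  }
}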
2023-07-21 11:17:44,288 DEBUG [RS:2;jenkins-hbase17:36969] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,289 DEBUG [RS:0;jenkins-hbase17:37509] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,289 DEBUG [RS:2;jenkins-hbase17:36969] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,289 DEBUG [RS:0;jenkins-hbase17:37509] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,289 DEBUG [RS:2;jenkins-hbase17:36969] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,289 DEBUG [RS:0;jenkins-hbase17:37509] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,289 DEBUG [RS:2;jenkins-hbase17:36969] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,289 DEBUG [RS:0;jenkins-hbase17:37509] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,289 DEBUG [RS:2;jenkins-hbase17:36969] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,289 DEBUG [RS:0;jenkins-hbase17:37509] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,290 DEBUG [RS:2;jenkins-hbase17:36969] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:17:44,290 DEBUG [RS:0;jenkins-hbase17:37509] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:17:44,290 DEBUG [RS:2;jenkins-hbase17:36969] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,290 DEBUG [RS:0;jenkins-hbase17:37509] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,290 DEBUG [RS:2;jenkins-hbase17:36969] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,290 INFO [RS:1;jenkins-hbase17:39253] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 11:17:44,290 DEBUG [RS:2;jenkins-hbase17:36969] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,290 DEBUG [RS:0;jenkins-hbase17:37509] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,290 DEBUG [RS:2;jenkins-hbase17:36969] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,290 DEBUG [RS:0;jenkins-hbase17:37509] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,290 DEBUG [RS:1;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,290 DEBUG [RS:0;jenkins-hbase17:37509] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,290 DEBUG [RS:1;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,290 DEBUG [RS:1;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,290 DEBUG [RS:1;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,290 DEBUG [RS:1;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,290 DEBUG [RS:1;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:17:44,290 DEBUG [RS:1;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,290 DEBUG [RS:1;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,290 DEBUG [RS:1;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,290 DEBUG [RS:1;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:44,294 INFO [RS:1;jenkins-hbase17:39253] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,294 INFO [RS:2;jenkins-hbase17:36969] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,294 INFO [RS:1;jenkins-hbase17:39253] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-21 11:17:44,294 INFO [RS:2;jenkins-hbase17:36969] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,294 INFO [RS:1;jenkins-hbase17:39253] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,294 INFO [RS:2;jenkins-hbase17:36969] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,298 INFO [RS:0;jenkins-hbase17:37509] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,299 INFO [RS:0;jenkins-hbase17:37509] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,299 INFO [RS:0;jenkins-hbase17:37509] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,308 INFO [RS:0;jenkins-hbase17:37509] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:17:44,308 INFO [RS:0;jenkins-hbase17:37509] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,37509,1689938263634-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,311 INFO [RS:2;jenkins-hbase17:36969] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:17:44,311 INFO [RS:1;jenkins-hbase17:39253] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:17:44,312 INFO [RS:2;jenkins-hbase17:36969] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,36969,1689938263902-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,312 INFO [RS:1;jenkins-hbase17:39253] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,39253,1689938263758-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
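The repeated "Chore ScheduledChore name=... is enabled." lines come from periodic tasks registered with a ChoreService. A self-contained sketch of that pattern; the chore name and period here are made up, not taken from the servers above:

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  public static void main(String[] args) {
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped = false;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    ChoreService choreService = new ChoreService("sketch");
    ScheduledChore periodicTask = new ScheduledChore("sketchChore", stopper, 1000) {
      @Override protected void chore() {
        System.out.println("periodic work, every 1000 ms");
      }
    };
    choreService.scheduleChore(periodicTask); // logs "... is enabled." at INFO
    choreService.shutdown();
  }
}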
2023-07-21 11:17:44,318 INFO [RS:0;jenkins-hbase17:37509] regionserver.Replication(203): jenkins-hbase17.apache.org,37509,1689938263634 started 2023-07-21 11:17:44,318 INFO [RS:0;jenkins-hbase17:37509] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,37509,1689938263634, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:37509, sessionid=0x1018798f8bd0001 2023-07-21 11:17:44,319 DEBUG [RS:0;jenkins-hbase17:37509] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:17:44,319 DEBUG [RS:0;jenkins-hbase17:37509] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,37509,1689938263634 2023-07-21 11:17:44,319 DEBUG [RS:0;jenkins-hbase17:37509] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,37509,1689938263634' 2023-07-21 11:17:44,319 DEBUG [RS:0;jenkins-hbase17:37509] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:17:44,319 DEBUG [RS:0;jenkins-hbase17:37509] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:17:44,319 DEBUG [RS:0;jenkins-hbase17:37509] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:17:44,319 DEBUG [RS:0;jenkins-hbase17:37509] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:17:44,319 DEBUG [RS:0;jenkins-hbase17:37509] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,37509,1689938263634 2023-07-21 11:17:44,319 DEBUG [RS:0;jenkins-hbase17:37509] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,37509,1689938263634' 2023-07-21 11:17:44,319 DEBUG [RS:0;jenkins-hbase17:37509] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:17:44,320 DEBUG [RS:0;jenkins-hbase17:37509] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:17:44,320 DEBUG [RS:0;jenkins-hbase17:37509] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:17:44,320 INFO [RS:0;jenkins-hbase17:37509] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 11:17:44,320 INFO [RS:0;jenkins-hbase17:37509] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 11:17:44,321 INFO [RS:2;jenkins-hbase17:36969] regionserver.Replication(203): jenkins-hbase17.apache.org,36969,1689938263902 started 2023-07-21 11:17:44,321 INFO [RS:2;jenkins-hbase17:36969] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,36969,1689938263902, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:36969, sessionid=0x1018798f8bd0003 2023-07-21 11:17:44,321 DEBUG [RS:2;jenkins-hbase17:36969] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:17:44,321 DEBUG [RS:2;jenkins-hbase17:36969] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:44,321 DEBUG [RS:2;jenkins-hbase17:36969] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,36969,1689938263902' 2023-07-21 11:17:44,321 DEBUG [RS:2;jenkins-hbase17:36969] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:17:44,322 DEBUG [RS:2;jenkins-hbase17:36969] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:17:44,322 DEBUG [RS:2;jenkins-hbase17:36969] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:17:44,322 DEBUG [RS:2;jenkins-hbase17:36969] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:17:44,322 DEBUG [RS:2;jenkins-hbase17:36969] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:44,322 DEBUG [RS:2;jenkins-hbase17:36969] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,36969,1689938263902' 2023-07-21 11:17:44,322 DEBUG [RS:2;jenkins-hbase17:36969] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:17:44,322 DEBUG [RS:2;jenkins-hbase17:36969] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:17:44,323 DEBUG [RS:2;jenkins-hbase17:36969] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:17:44,323 INFO [RS:2;jenkins-hbase17:36969] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 11:17:44,323 INFO [RS:2;jenkins-hbase17:36969] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 11:17:44,326 INFO [RS:1;jenkins-hbase17:39253] regionserver.Replication(203): jenkins-hbase17.apache.org,39253,1689938263758 started 2023-07-21 11:17:44,327 INFO [RS:1;jenkins-hbase17:39253] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,39253,1689938263758, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:39253, sessionid=0x1018798f8bd0002 2023-07-21 11:17:44,327 DEBUG [RS:1;jenkins-hbase17:39253] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:17:44,327 DEBUG [RS:1;jenkins-hbase17:39253] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,39253,1689938263758 2023-07-21 11:17:44,327 DEBUG [RS:1;jenkins-hbase17:39253] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,39253,1689938263758' 2023-07-21 11:17:44,327 DEBUG [RS:1;jenkins-hbase17:39253] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:17:44,327 DEBUG [RS:1;jenkins-hbase17:39253] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:17:44,328 DEBUG [RS:1;jenkins-hbase17:39253] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:17:44,328 DEBUG [RS:1;jenkins-hbase17:39253] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:17:44,328 DEBUG [RS:1;jenkins-hbase17:39253] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,39253,1689938263758 2023-07-21 11:17:44,328 DEBUG [RS:1;jenkins-hbase17:39253] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,39253,1689938263758' 2023-07-21 11:17:44,328 DEBUG [RS:1;jenkins-hbase17:39253] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:17:44,328 DEBUG [RS:1;jenkins-hbase17:39253] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:17:44,329 DEBUG [RS:1;jenkins-hbase17:39253] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:17:44,329 INFO [RS:1;jenkins-hbase17:39253] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 11:17:44,329 INFO [RS:1;jenkins-hbase17:39253] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
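All three region servers report quota support disabled, which is the default. A sketch of how it would be switched on, assuming the standard key:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class QuotaEnableSketch {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // With this set, RegionServerRpcQuotaManager and the space quota manager start.
    conf.setBoolean("hbase.quota.enabled", true);
    return conf;
  }
}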
2023-07-21 11:17:44,358 DEBUG [jenkins-hbase17:37771] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 11:17:44,358 DEBUG [jenkins-hbase17:37771] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:44,358 DEBUG [jenkins-hbase17:37771] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:44,358 DEBUG [jenkins-hbase17:37771] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:44,358 DEBUG [jenkins-hbase17:37771] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:44,358 DEBUG [jenkins-hbase17:37771] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:44,359 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,37509,1689938263634, state=OPENING 2023-07-21 11:17:44,360 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 11:17:44,361 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:44,361 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,37509,1689938263634}] 2023-07-21 11:17:44,362 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:17:44,422 INFO [RS:0;jenkins-hbase17:37509] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C37509%2C1689938263634, suffix=, logDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/WALs/jenkins-hbase17.apache.org,37509,1689938263634, archiveDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/oldWALs, maxLogs=32 2023-07-21 11:17:44,424 INFO [RS:2;jenkins-hbase17:36969] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C36969%2C1689938263902, suffix=, logDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/WALs/jenkins-hbase17.apache.org,36969,1689938263902, archiveDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/oldWALs, maxLogs=32 2023-07-21 11:17:44,439 INFO [RS:1;jenkins-hbase17:39253] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C39253%2C1689938263758, suffix=, logDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/WALs/jenkins-hbase17.apache.org,39253,1689938263758, archiveDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/oldWALs, maxLogs=32 2023-07-21 11:17:44,456 WARN [ReadOnlyZKClient-127.0.0.1:54201@0x6bda7a20] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 11:17:44,456 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,37771,1689938263479] 
ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:17:44,474 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45051,DS-488ee4b5-1933-4b27-8edd-ef92db9d53f5,DISK] 2023-07-21 11:17:44,474 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40211,DS-044b3e42-0a47-489d-9295-f48998774954,DISK] 2023-07-21 11:17:44,477 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45051,DS-488ee4b5-1933-4b27-8edd-ef92db9d53f5,DISK] 2023-07-21 11:17:44,477 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45343,DS-dadc5c6a-fe7e-4a9e-82a4-3ac7321013bd,DISK] 2023-07-21 11:17:44,481 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45343,DS-dadc5c6a-fe7e-4a9e-82a4-3ac7321013bd,DISK] 2023-07-21 11:17:44,482 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40211,DS-044b3e42-0a47-489d-9295-f48998774954,DISK] 2023-07-21 11:17:44,482 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:51810, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:17:44,495 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37509] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 136.243.18.41:51810 deadline: 1689938324482, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase17.apache.org,37509,1689938263634 2023-07-21 11:17:44,495 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40211,DS-044b3e42-0a47-489d-9295-f48998774954,DISK] 2023-07-21 11:17:44,495 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45343,DS-dadc5c6a-fe7e-4a9e-82a4-3ac7321013bd,DISK] 2023-07-21 11:17:44,495 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45051,DS-488ee4b5-1933-4b27-8edd-ef92db9d53f5,DISK] 2023-07-21 11:17:44,497 INFO [RS:2;jenkins-hbase17:36969] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/WALs/jenkins-hbase17.apache.org,36969,1689938263902/jenkins-hbase17.apache.org%2C36969%2C1689938263902.1689938264425 2023-07-21 11:17:44,501 INFO [RS:0;jenkins-hbase17:37509] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/WALs/jenkins-hbase17.apache.org,37509,1689938263634/jenkins-hbase17.apache.org%2C37509%2C1689938263634.1689938264422 2023-07-21 11:17:44,504 DEBUG [RS:2;jenkins-hbase17:36969] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45051,DS-488ee4b5-1933-4b27-8edd-ef92db9d53f5,DISK], DatanodeInfoWithStorage[127.0.0.1:45343,DS-dadc5c6a-fe7e-4a9e-82a4-3ac7321013bd,DISK], DatanodeInfoWithStorage[127.0.0.1:40211,DS-044b3e42-0a47-489d-9295-f48998774954,DISK]] 2023-07-21 11:17:44,504 DEBUG [RS:0;jenkins-hbase17:37509] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40211,DS-044b3e42-0a47-489d-9295-f48998774954,DISK], DatanodeInfoWithStorage[127.0.0.1:45343,DS-dadc5c6a-fe7e-4a9e-82a4-3ac7321013bd,DISK], DatanodeInfoWithStorage[127.0.0.1:45051,DS-488ee4b5-1933-4b27-8edd-ef92db9d53f5,DISK]] 2023-07-21 11:17:44,505 INFO [RS:1;jenkins-hbase17:39253] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/WALs/jenkins-hbase17.apache.org,39253,1689938263758/jenkins-hbase17.apache.org%2C39253%2C1689938263758.1689938264439 2023-07-21 11:17:44,508 DEBUG [RS:1;jenkins-hbase17:39253] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45051,DS-488ee4b5-1933-4b27-8edd-ef92db9d53f5,DISK], DatanodeInfoWithStorage[127.0.0.1:45343,DS-dadc5c6a-fe7e-4a9e-82a4-3ac7321013bd,DISK], DatanodeInfoWithStorage[127.0.0.1:40211,DS-044b3e42-0a47-489d-9295-f48998774954,DISK]] 2023-07-21 11:17:44,518 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,37509,1689938263634 2023-07-21 11:17:44,519 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:17:44,521 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:51818, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:17:44,525 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 11:17:44,525 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:17:44,528 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C37509%2C1689938263634.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/WALs/jenkins-hbase17.apache.org,37509,1689938263634, archiveDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/oldWALs, maxLogs=32 2023-07-21 11:17:44,543 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45343,DS-dadc5c6a-fe7e-4a9e-82a4-3ac7321013bd,DISK] 2023-07-21 
11:17:44,543 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40211,DS-044b3e42-0a47-489d-9295-f48998774954,DISK] 2023-07-21 11:17:44,543 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45051,DS-488ee4b5-1933-4b27-8edd-ef92db9d53f5,DISK] 2023-07-21 11:17:44,554 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/WALs/jenkins-hbase17.apache.org,37509,1689938263634/jenkins-hbase17.apache.org%2C37509%2C1689938263634.meta.1689938264528.meta 2023-07-21 11:17:44,554 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45343,DS-dadc5c6a-fe7e-4a9e-82a4-3ac7321013bd,DISK], DatanodeInfoWithStorage[127.0.0.1:45051,DS-488ee4b5-1933-4b27-8edd-ef92db9d53f5,DISK], DatanodeInfoWithStorage[127.0.0.1:40211,DS-044b3e42-0a47-489d-9295-f48998774954,DISK]] 2023-07-21 11:17:44,555 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:44,555 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:17:44,555 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 11:17:44,555 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
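RegionCoprocessorHost loads MultiRowMutationEndpoint "from HTD" because the coprocessor is declared on the table descriptor itself, as in the hbase:meta descriptor at the top of this section. A minimal sketch for an ordinary table; the table and family names are illustrative, not taken from the test:

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CoprocessorOnDescriptorSketch {
  public static TableDescriptor build() throws IOException {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("example"))
        // Loaded by RegionCoprocessorHost when a region of this table opens.
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setInMemory(true)
            .setMaxVersions(3)
            .setBlocksize(8192)
            .build())
        .build();
  }
}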
2023-07-21 11:17:44,555 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 11:17:44,555 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:44,555 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 11:17:44,555 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 11:17:44,558 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 11:17:44,559 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/info 2023-07-21 11:17:44,559 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/info 2023-07-21 11:17:44,559 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 11:17:44,560 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:44,560 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 11:17:44,561 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:17:44,561 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:17:44,561 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 11:17:44,562 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:44,562 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 11:17:44,563 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/table 2023-07-21 11:17:44,563 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/table 2023-07-21 11:17:44,563 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 11:17:44,564 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:44,565 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740 2023-07-21 11:17:44,566 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740 2023-07-21 11:17:44,567 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
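[editor's note] The CompactionConfiguration lines above print the stock 2.4 defaults: minCompactSize 128 MB, min/max files to compact 3/10, ratio 1.2, off-peak ratio 5.0, major compaction period 604800000 ms (7 days) with 0.5 jitter. A minimal sketch of the hbase-site configuration keys behind those numbers, shown as overrides on a client-side Configuration (illustrative only; in this test the values are simply the defaults):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // These keys back the values printed by CompactionConfiguration(173) above.
        conf.setInt("hbase.hstore.compaction.min", 3);         // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);        // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);  // compaction selection ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);            // 7-day major period
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);
        System.out.println(conf.get("hbase.hstore.compaction.ratio"));
      }
    }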
2023-07-21 11:17:44,568 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 11:17:44,569 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9446575040, jitterRate=-0.1202191412448883}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 11:17:44,569 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 11:17:44,570 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689938264517 2023-07-21 11:17:44,573 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 11:17:44,574 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 11:17:44,575 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,37509,1689938263634, state=OPEN 2023-07-21 11:17:44,576 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 11:17:44,576 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:17:44,577 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 11:17:44,578 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,37509,1689938263634 in 215 msec 2023-07-21 11:17:44,579 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 11:17:44,579 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 373 msec 2023-07-21 11:17:44,580 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 429 msec 2023-07-21 11:17:44,580 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689938264580, completionTime=-1 2023-07-21 11:17:44,580 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 11:17:44,580 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
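[editor's note] Two things happen in the records above: the FlushLargeStoresPolicy lower bound falls back to memstore-flush-size divided by the number of families (128 MB / 3 families of hbase:meta = 44739242 bytes, the "42.7 M" and flushSizeLowerBound=44739242 printed here, assuming the default hbase.hregion.memstore.flush.size of 134217728), and MetaTableLocator publishes the meta location to the /hbase/meta-region-server znode, which the master's MetaRegionLocationCache picks up via a ZooKeeper watch. A minimal sketch of watching that znode with the plain ZooKeeper client (ensemble address copied from the log; it changes on every mini-cluster run, and the znode payload is a protobuf-encoded ServerName with a magic prefix, so only its size is printed):

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class MetaLocationWatchSketch {
      public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("127.0.0.1:54201", 90000, (WatchedEvent e) -> {
          if (e.getState() == Watcher.Event.KeeperState.SyncConnected) {
            connected.countDown();
          }
          System.out.println("ZK event: " + e.getType() + " on " + e.getPath());
        });
        connected.await();
        // watch=true re-registers the default watcher, mirroring what MetaRegionLocationCache does.
        byte[] data = zk.getData("/hbase/meta-region-server", true, null);
        System.out.println("meta-region-server znode has " + data.length + " bytes");
        Thread.sleep(10_000); // keep the session open long enough to see a NodeDataChanged event
        zk.close();
      }
    }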
2023-07-21 11:17:44,585 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 11:17:44,585 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689938324585 2023-07-21 11:17:44,585 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689938384585 2023-07-21 11:17:44,586 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-07-21 11:17:44,594 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,37771,1689938263479-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,594 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,37771,1689938263479-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,594 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,37771,1689938263479-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,594 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:37771, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,594 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:44,594 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-21 11:17:44,595 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:44,595 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 11:17:44,596 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 11:17:44,597 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:17:44,598 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:17:44,600 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp/data/hbase/namespace/6569624467c6eda0eba15844d5358c3c 2023-07-21 11:17:44,600 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp/data/hbase/namespace/6569624467c6eda0eba15844d5358c3c empty. 2023-07-21 11:17:44,601 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp/data/hbase/namespace/6569624467c6eda0eba15844d5358c3c 2023-07-21 11:17:44,601 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 11:17:44,617 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:44,618 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6569624467c6eda0eba15844d5358c3c, NAME => 'hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp 2023-07-21 11:17:44,638 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:44,638 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 6569624467c6eda0eba15844d5358c3c, disabling compactions & flushes 2023-07-21 11:17:44,638 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c. 2023-07-21 11:17:44,638 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c. 2023-07-21 11:17:44,638 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c. after waiting 0 ms 2023-07-21 11:17:44,638 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c. 2023-07-21 11:17:44,638 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c. 2023-07-21 11:17:44,638 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 6569624467c6eda0eba15844d5358c3c: 2023-07-21 11:17:44,641 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:17:44,641 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938264641"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938264641"}]},"ts":"1689938264641"} 2023-07-21 11:17:44,647 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 11:17:44,648 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:17:44,648 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938264648"}]},"ts":"1689938264648"} 2023-07-21 11:17:44,649 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 11:17:44,651 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:44,652 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:44,652 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:44,652 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:44,652 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:44,652 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6569624467c6eda0eba15844d5358c3c, ASSIGN}] 2023-07-21 11:17:44,653 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6569624467c6eda0eba15844d5358c3c, ASSIGN 2023-07-21 11:17:44,654 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=6569624467c6eda0eba15844d5358c3c, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,39253,1689938263758; forceNewPlan=false, retain=false 2023-07-21 11:17:44,797 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,37771,1689938263479] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:44,799 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,37771,1689938263479] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 11:17:44,801 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:17:44,802 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:17:44,804 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp/data/hbase/rsgroup/5437b1c01d72b0da90c7ad3989ee4c8c 2023-07-21 11:17:44,804 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp/data/hbase/rsgroup/5437b1c01d72b0da90c7ad3989ee4c8c empty. 2023-07-21 11:17:44,805 INFO [jenkins-hbase17:37771] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
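[editor's note] The hbase:namespace table created above (CreateTableProcedure pid=4, then TransitRegionStateProcedure pid=5 for its single region) is declared with one 'info' family: BLOOMFILTER ROW, IN_MEMORY true, VERSIONS 10, KEEP_DELETED_CELLS FALSE, BLOCKSIZE 8192, TTL FOREVER. A hedged sketch reproducing those column-family settings with the public descriptor builders; "namespace_demo" is a stand-in name, since hbase:namespace itself is created internally by the master's TableNamespaceManager:

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.KeepDeletedCells;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceLikeDescriptorSketch {
      public static void main(String[] args) {
        // Mirrors the family settings the master logs when creating hbase:namespace.
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.ROW)
            .setInMemory(true)
            .setMaxVersions(10)
            .setKeepDeletedCells(KeepDeletedCells.FALSE)
            .setBlocksize(8192)
            .setTimeToLive(HConstants.FOREVER)
            .build();
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("namespace_demo"))
            .setColumnFamily(info)
            .build();
        System.out.println(td);
      }
    }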
2023-07-21 11:17:44,806 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6569624467c6eda0eba15844d5358c3c, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,39253,1689938263758 2023-07-21 11:17:44,806 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp/data/hbase/rsgroup/5437b1c01d72b0da90c7ad3989ee4c8c 2023-07-21 11:17:44,806 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938264806"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938264806"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938264806"}]},"ts":"1689938264806"} 2023-07-21 11:17:44,806 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 11:17:44,807 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure 6569624467c6eda0eba15844d5358c3c, server=jenkins-hbase17.apache.org,39253,1689938263758}] 2023-07-21 11:17:44,960 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,39253,1689938263758 2023-07-21 11:17:44,960 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:17:44,961 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:41786, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:17:44,967 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c. 
2023-07-21 11:17:44,967 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6569624467c6eda0eba15844d5358c3c, NAME => 'hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:44,967 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 6569624467c6eda0eba15844d5358c3c 2023-07-21 11:17:44,967 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:44,967 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 6569624467c6eda0eba15844d5358c3c 2023-07-21 11:17:44,967 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 6569624467c6eda0eba15844d5358c3c 2023-07-21 11:17:44,969 INFO [StoreOpener-6569624467c6eda0eba15844d5358c3c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 6569624467c6eda0eba15844d5358c3c 2023-07-21 11:17:44,970 DEBUG [StoreOpener-6569624467c6eda0eba15844d5358c3c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/namespace/6569624467c6eda0eba15844d5358c3c/info 2023-07-21 11:17:44,970 DEBUG [StoreOpener-6569624467c6eda0eba15844d5358c3c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/namespace/6569624467c6eda0eba15844d5358c3c/info 2023-07-21 11:17:44,970 INFO [StoreOpener-6569624467c6eda0eba15844d5358c3c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6569624467c6eda0eba15844d5358c3c columnFamilyName info 2023-07-21 11:17:44,971 INFO [StoreOpener-6569624467c6eda0eba15844d5358c3c-1] regionserver.HStore(310): Store=6569624467c6eda0eba15844d5358c3c/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:44,972 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/namespace/6569624467c6eda0eba15844d5358c3c 2023-07-21 11:17:44,972 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/namespace/6569624467c6eda0eba15844d5358c3c 2023-07-21 11:17:44,975 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 6569624467c6eda0eba15844d5358c3c 2023-07-21 11:17:44,977 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/namespace/6569624467c6eda0eba15844d5358c3c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:44,977 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 6569624467c6eda0eba15844d5358c3c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10369730880, jitterRate=-0.03424355387687683}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:44,977 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 6569624467c6eda0eba15844d5358c3c: 2023-07-21 11:17:44,978 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c., pid=7, masterSystemTime=1689938264960 2023-07-21 11:17:44,981 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c. 2023-07-21 11:17:44,982 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c. 
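[editor's note] The region-open record above prints ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10369730880, jitterRate=-0.03424355387687683}. That number is consistent with the desired max file size being the default hbase.hregion.max.filesize (10737418240 bytes) adjusted by base * jitterRate; a quick check under that assumption:

    public class SplitSizeJitterSketch {
      public static void main(String[] args) {
        // Assumption: base is the default hbase.hregion.max.filesize (10 GiB) and the
        // jittered size is base plus (long)(base * jitterRate), with jitterRate copied
        // from the region-open log record above.
        long base = 10737418240L;
        double jitterRate = -0.03424355387687683;
        long desired = base + (long) (base * jitterRate);
        System.out.println(desired); // 10369730880 if the default max file size was in effect
      }
    }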
2023-07-21 11:17:44,982 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6569624467c6eda0eba15844d5358c3c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,39253,1689938263758 2023-07-21 11:17:44,982 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938264982"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938264982"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938264982"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938264982"}]},"ts":"1689938264982"} 2023-07-21 11:17:44,985 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-21 11:17:44,985 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure 6569624467c6eda0eba15844d5358c3c, server=jenkins-hbase17.apache.org,39253,1689938263758 in 177 msec 2023-07-21 11:17:44,986 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-21 11:17:44,987 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=6569624467c6eda0eba15844d5358c3c, ASSIGN in 333 msec 2023-07-21 11:17:44,987 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:17:44,987 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938264987"}]},"ts":"1689938264987"} 2023-07-21 11:17:44,988 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 11:17:44,990 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:17:44,992 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 395 msec 2023-07-21 11:17:44,997 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 11:17:44,997 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:17:44,997 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:45,001 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:17:45,002 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:41794, 
version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:17:45,005 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 11:17:45,015 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:17:45,017 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-07-21 11:17:45,027 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 11:17:45,028 DEBUG [PEWorker-4] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-21 11:17:45,028 DEBUG [PEWorker-4] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=9, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 11:17:45,239 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:45,240 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5437b1c01d72b0da90c7ad3989ee4c8c, NAME => 'hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp 2023-07-21 11:17:45,249 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:45,249 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 5437b1c01d72b0da90c7ad3989ee4c8c, disabling compactions & flushes 2023-07-21 11:17:45,249 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c. 2023-07-21 11:17:45,249 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c. 2023-07-21 11:17:45,249 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c. 
after waiting 0 ms 2023-07-21 11:17:45,249 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c. 2023-07-21 11:17:45,249 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c. 2023-07-21 11:17:45,249 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 5437b1c01d72b0da90c7ad3989ee4c8c: 2023-07-21 11:17:45,252 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:17:45,253 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938265253"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938265253"}]},"ts":"1689938265253"} 2023-07-21 11:17:45,254 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 11:17:45,255 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:17:45,255 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938265255"}]},"ts":"1689938265255"} 2023-07-21 11:17:45,256 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 11:17:45,258 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:45,258 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:45,258 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:45,258 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:45,258 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:45,258 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=5437b1c01d72b0da90c7ad3989ee4c8c, ASSIGN}] 2023-07-21 11:17:45,259 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=5437b1c01d72b0da90c7ad3989ee4c8c, ASSIGN 2023-07-21 11:17:45,259 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=5437b1c01d72b0da90c7ad3989ee4c8c, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,37509,1689938263634; forceNewPlan=false, retain=false 2023-07-21 11:17:45,409 INFO [jenkins-hbase17:37771] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
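[editor's note] The hbase:rsgroup table being created here carries the MultiRowMutationEndpoint coprocessor, a SPLIT_POLICY of DisabledRegionSplitPolicy, and a single 'm' family with VERSIONS 1 and BLOCKSIZE 65536. A hedged sketch of the same descriptor shape using the public builders ("rsgroup_demo" is a stand-in table name; the real table is created by RSGroupInfoManagerImpl at master startup):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RsGroupLikeDescriptorSketch {
      public static void main(String[] args) {
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("rsgroup_demo"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("m"))
                .setMaxVersions(1)
                .setBlocksize(65536)
                .build())
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .build();
        System.out.println(td);
      }
    }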
2023-07-21 11:17:45,410 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=5437b1c01d72b0da90c7ad3989ee4c8c, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,37509,1689938263634 2023-07-21 11:17:45,411 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938265410"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938265410"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938265410"}]},"ts":"1689938265410"} 2023-07-21 11:17:45,412 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 5437b1c01d72b0da90c7ad3989ee4c8c, server=jenkins-hbase17.apache.org,37509,1689938263634}] 2023-07-21 11:17:45,567 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c. 2023-07-21 11:17:45,567 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5437b1c01d72b0da90c7ad3989ee4c8c, NAME => 'hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:45,567 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:17:45,567 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c. service=MultiRowMutationService 2023-07-21 11:17:45,567 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
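[editor's note] The hbase:rsgroup region opened in the following records stores one row per RegionServer group under the 'm' family, which the RSGroupInfoManager later reads when it refreshes in online mode. A minimal sketch of inspecting that table with the standard client API (the 'm' qualifiers hold protobuf-serialized group info, so only the row keys, i.e. group names, are printed):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RsGroupTableScanSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("hbase:rsgroup"));
             ResultScanner scanner =
                 table.getScanner(new Scan().addFamily(Bytes.toBytes("m")))) {
          for (Result row : scanner) {
            System.out.println(Bytes.toString(row.getRow())); // group name per row
          }
        }
      }
    }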
2023-07-21 11:17:45,567 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 5437b1c01d72b0da90c7ad3989ee4c8c 2023-07-21 11:17:45,567 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:45,567 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 5437b1c01d72b0da90c7ad3989ee4c8c 2023-07-21 11:17:45,567 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 5437b1c01d72b0da90c7ad3989ee4c8c 2023-07-21 11:17:45,569 INFO [StoreOpener-5437b1c01d72b0da90c7ad3989ee4c8c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 5437b1c01d72b0da90c7ad3989ee4c8c 2023-07-21 11:17:45,570 DEBUG [StoreOpener-5437b1c01d72b0da90c7ad3989ee4c8c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/rsgroup/5437b1c01d72b0da90c7ad3989ee4c8c/m 2023-07-21 11:17:45,570 DEBUG [StoreOpener-5437b1c01d72b0da90c7ad3989ee4c8c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/rsgroup/5437b1c01d72b0da90c7ad3989ee4c8c/m 2023-07-21 11:17:45,570 INFO [StoreOpener-5437b1c01d72b0da90c7ad3989ee4c8c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5437b1c01d72b0da90c7ad3989ee4c8c columnFamilyName m 2023-07-21 11:17:45,571 INFO [StoreOpener-5437b1c01d72b0da90c7ad3989ee4c8c-1] regionserver.HStore(310): Store=5437b1c01d72b0da90c7ad3989ee4c8c/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:45,571 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/rsgroup/5437b1c01d72b0da90c7ad3989ee4c8c 2023-07-21 11:17:45,572 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/rsgroup/5437b1c01d72b0da90c7ad3989ee4c8c 2023-07-21 11:17:45,574 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 5437b1c01d72b0da90c7ad3989ee4c8c 2023-07-21 11:17:45,576 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/rsgroup/5437b1c01d72b0da90c7ad3989ee4c8c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:45,576 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 5437b1c01d72b0da90c7ad3989ee4c8c; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@6456dd4b, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:45,576 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 5437b1c01d72b0da90c7ad3989ee4c8c: 2023-07-21 11:17:45,577 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c., pid=11, masterSystemTime=1689938265563 2023-07-21 11:17:45,578 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c. 2023-07-21 11:17:45,578 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c. 2023-07-21 11:17:45,578 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=5437b1c01d72b0da90c7ad3989ee4c8c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,37509,1689938263634 2023-07-21 11:17:45,578 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938265578"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938265578"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938265578"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938265578"}]},"ts":"1689938265578"} 2023-07-21 11:17:45,580 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-07-21 11:17:45,580 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 5437b1c01d72b0da90c7ad3989ee4c8c, server=jenkins-hbase17.apache.org,37509,1689938263634 in 167 msec 2023-07-21 11:17:45,582 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=6 2023-07-21 11:17:45,582 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=5437b1c01d72b0da90c7ad3989ee4c8c, ASSIGN in 322 msec 2023-07-21 11:17:45,587 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:17:45,589 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 562 msec 2023-07-21 11:17:45,590 INFO 
[PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:17:45,590 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938265590"}]},"ts":"1689938265590"} 2023-07-21 11:17:45,591 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 11:17:45,595 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:17:45,595 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 11:17:45,596 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 11:17:45,596 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 798 msec 2023-07-21 11:17:45,596 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.566sec 2023-07-21 11:17:45,597 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-21 11:17:45,597 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 11:17:45,597 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 11:17:45,597 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,37771,1689938263479-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 11:17:45,597 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,37771,1689938263479-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-21 11:17:45,603 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 11:17:45,605 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,37771,1689938263479] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 11:17:45,605 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,37771,1689938263479] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
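[editor's note] With hbase:rsgroup online and the cached group information refreshed, the GroupBasedLoadBalancer comes up and, a few records later, a client issues RSGroupAdminService.ListRSGroupInfos (the "list rsgroup" call). A sketch of that call from the client side, assuming the RSGroupAdminClient class and its listRSGroups() method from the branch-2.4 hbase-rsgroup module (the same API the test's VerifyingRSGroupAdminClient wraps); treat the exact signatures as assumptions:

    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListRsGroupsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          // Issues the RSGroupAdminService.ListRSGroupInfos master service request
          // seen in the log shortly after this point.
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          List<RSGroupInfo> groups = rsGroupAdmin.listRSGroups();
          for (RSGroupInfo group : groups) {
            System.out.println(group.getName() + " servers=" + group.getServers());
          }
        }
      }
    }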
2023-07-21 11:17:45,613 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:45,613 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,37771,1689938263479] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:45,614 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,37771,1689938263479] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 11:17:45,615 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,37771,1689938263479] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 11:17:45,623 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ReadOnlyZKClient(139): Connect 0x16a0d9cc to 127.0.0.1:54201 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:17:45,657 DEBUG [Listener at localhost.localdomain/37917] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@581bef8f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:45,676 DEBUG [hconnection-0x1f4fb7fb-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:17:45,679 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:51828, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:17:45,680 INFO [Listener at localhost.localdomain/37917] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,37771,1689938263479 2023-07-21 11:17:45,681 INFO [Listener at localhost.localdomain/37917] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:45,683 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 11:17:45,685 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:54562, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 11:17:45,687 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 11:17:45,687 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:45,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(492): Client=jenkins//136.243.18.41 set balanceSwitch=false 2023-07-21 11:17:45,689 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ReadOnlyZKClient(139): Connect 0x3f1a4df4 to 127.0.0.1:54201 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 
11:17:45,695 DEBUG [Listener at localhost.localdomain/37917] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5d08d583, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:45,696 INFO [Listener at localhost.localdomain/37917] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:54201 2023-07-21 11:17:45,702 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:17:45,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:45,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:45,714 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1018798f8bd000a connected 2023-07-21 11:17:45,719 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 11:17:45,741 INFO [Listener at localhost.localdomain/37917] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:17:45,742 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:45,742 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:45,742 INFO [Listener at localhost.localdomain/37917] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:17:45,742 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:17:45,742 INFO [Listener at localhost.localdomain/37917] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:17:45,742 INFO [Listener at localhost.localdomain/37917] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:17:45,743 INFO [Listener at localhost.localdomain/37917] ipc.NettyRpcServer(120): Bind to /136.243.18.41:33755 2023-07-21 11:17:45,743 INFO [Listener at localhost.localdomain/37917] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:17:45,745 DEBUG [Listener at localhost.localdomain/37917] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:17:45,746 INFO [Listener at localhost.localdomain/37917] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:45,747 INFO [Listener at localhost.localdomain/37917] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:17:45,748 INFO [Listener at localhost.localdomain/37917] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33755 connecting to ZooKeeper ensemble=127.0.0.1:54201 2023-07-21 11:17:45,751 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:337550x0, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:17:45,752 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ZKUtil(162): regionserver:337550x0, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 11:17:45,753 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33755-0x1018798f8bd000b connected 2023-07-21 11:17:45,753 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ZKUtil(162): regionserver:33755-0x1018798f8bd000b, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-21 11:17:45,754 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ZKUtil(164): regionserver:33755-0x1018798f8bd000b, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:17:45,754 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33755 2023-07-21 11:17:45,755 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33755 2023-07-21 11:17:45,755 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33755 2023-07-21 11:17:45,755 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33755 2023-07-21 11:17:45,755 DEBUG [Listener at localhost.localdomain/37917] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33755 2023-07-21 11:17:45,757 INFO [Listener at localhost.localdomain/37917] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:17:45,757 INFO [Listener at localhost.localdomain/37917] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:17:45,757 INFO [Listener at localhost.localdomain/37917] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:17:45,757 INFO [Listener at localhost.localdomain/37917] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:17:45,757 INFO [Listener at localhost.localdomain/37917] http.HttpServer(886): Added filter static_user_filter 
(class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:17:45,757 INFO [Listener at localhost.localdomain/37917] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:17:45,757 INFO [Listener at localhost.localdomain/37917] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 11:17:45,758 INFO [Listener at localhost.localdomain/37917] http.HttpServer(1146): Jetty bound to port 37005 2023-07-21 11:17:45,758 INFO [Listener at localhost.localdomain/37917] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:17:45,761 INFO [Listener at localhost.localdomain/37917] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:45,761 INFO [Listener at localhost.localdomain/37917] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@388658bf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:17:45,761 INFO [Listener at localhost.localdomain/37917] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:45,761 INFO [Listener at localhost.localdomain/37917] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2a6666b1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:17:45,854 INFO [Listener at localhost.localdomain/37917] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:17:45,855 INFO [Listener at localhost.localdomain/37917] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:17:45,855 INFO [Listener at localhost.localdomain/37917] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:17:45,855 INFO [Listener at localhost.localdomain/37917] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:17:45,856 INFO [Listener at localhost.localdomain/37917] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:17:45,857 INFO [Listener at localhost.localdomain/37917] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@43c8713c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/java.io.tmpdir/jetty-0_0_0_0-37005-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6740791551130640378/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:45,858 INFO [Listener at localhost.localdomain/37917] server.AbstractConnector(333): Started ServerConnector@7621d2b2{HTTP/1.1, (http/1.1)}{0.0.0.0:37005} 2023-07-21 11:17:45,859 INFO [Listener at localhost.localdomain/37917] server.Server(415): Started @48890ms 2023-07-21 11:17:45,861 INFO [RS:3;jenkins-hbase17:33755] regionserver.HRegionServer(951): ClusterId : 
e433026e-7f02-4fd4-a13c-c96e9cd9b4e4 2023-07-21 11:17:45,864 DEBUG [RS:3;jenkins-hbase17:33755] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:17:45,865 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-21 11:17:45,866 DEBUG [RS:3;jenkins-hbase17:33755] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:17:45,866 DEBUG [RS:3;jenkins-hbase17:33755] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:17:45,867 DEBUG [RS:3;jenkins-hbase17:33755] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:17:45,871 DEBUG [RS:3;jenkins-hbase17:33755] zookeeper.ReadOnlyZKClient(139): Connect 0x258257f1 to 127.0.0.1:54201 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:17:45,878 DEBUG [RS:3;jenkins-hbase17:33755] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2b45bd31, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:17:45,878 DEBUG [RS:3;jenkins-hbase17:33755] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f7a3c81, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:17:45,891 DEBUG [RS:3;jenkins-hbase17:33755] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase17:33755 2023-07-21 11:17:45,891 INFO [RS:3;jenkins-hbase17:33755] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:17:45,891 INFO [RS:3;jenkins-hbase17:33755] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:17:45,891 DEBUG [RS:3;jenkins-hbase17:33755] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 11:17:45,892 INFO [RS:3;jenkins-hbase17:33755] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,37771,1689938263479 with isa=jenkins-hbase17.apache.org/136.243.18.41:33755, startcode=1689938265741 2023-07-21 11:17:45,892 DEBUG [RS:3;jenkins-hbase17:33755] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:17:45,898 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:43245, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:17:45,904 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37771] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,33755,1689938265741 2023-07-21 11:17:45,904 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,37771,1689938263479] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 11:17:45,905 DEBUG [RS:3;jenkins-hbase17:33755] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a 2023-07-21 11:17:45,905 DEBUG [RS:3;jenkins-hbase17:33755] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:39155 2023-07-21 11:17:45,905 DEBUG [RS:3;jenkins-hbase17:33755] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44673 2023-07-21 11:17:45,908 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:45,908 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,37771,1689938263479] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:45,908 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:45,908 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:45,908 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:45,909 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,37771,1689938263479] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 11:17:45,909 DEBUG [RS:3;jenkins-hbase17:33755] zookeeper.ZKUtil(162): regionserver:33755-0x1018798f8bd000b, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33755,1689938265741 2023-07-21 11:17:45,909 WARN [RS:3;jenkins-hbase17:33755] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 11:17:45,909 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,33755,1689938265741] 2023-07-21 11:17:45,909 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689938263758 2023-07-21 11:17:45,909 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689938263758 2023-07-21 11:17:45,909 INFO [RS:3;jenkins-hbase17:33755] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:17:45,910 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689938263758 2023-07-21 11:17:45,910 DEBUG [RS:3;jenkins-hbase17:33755] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/WALs/jenkins-hbase17.apache.org,33755,1689938265741 2023-07-21 11:17:45,910 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37509,1689938263634 2023-07-21 11:17:45,910 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37509,1689938263634 2023-07-21 11:17:45,910 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37509,1689938263634 2023-07-21 11:17:45,910 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,37771,1689938263479] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-21 11:17:45,911 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33755,1689938265741 2023-07-21 11:17:45,911 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33755,1689938265741 2023-07-21 11:17:45,911 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:45,911 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33755,1689938265741 2023-07-21 11:17:45,911 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:45,911 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:45,937 DEBUG [RS:3;jenkins-hbase17:33755] zookeeper.ZKUtil(162): regionserver:33755-0x1018798f8bd000b, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689938263758 2023-07-21 11:17:45,937 DEBUG [RS:3;jenkins-hbase17:33755] zookeeper.ZKUtil(162): regionserver:33755-0x1018798f8bd000b, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37509,1689938263634 2023-07-21 11:17:45,938 DEBUG [RS:3;jenkins-hbase17:33755] zookeeper.ZKUtil(162): regionserver:33755-0x1018798f8bd000b, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33755,1689938265741 2023-07-21 11:17:45,938 DEBUG [RS:3;jenkins-hbase17:33755] zookeeper.ZKUtil(162): regionserver:33755-0x1018798f8bd000b, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:45,939 DEBUG [RS:3;jenkins-hbase17:33755] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:17:45,939 INFO [RS:3;jenkins-hbase17:33755] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:17:46,010 INFO [RS:3;jenkins-hbase17:33755] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:17:46,010 INFO [RS:3;jenkins-hbase17:33755] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:17:46,011 INFO [RS:3;jenkins-hbase17:33755] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:46,011 INFO [RS:3;jenkins-hbase17:33755] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:17:46,013 INFO [RS:3;jenkins-hbase17:33755] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 11:17:46,014 DEBUG [RS:3;jenkins-hbase17:33755] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:46,014 DEBUG [RS:3;jenkins-hbase17:33755] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:46,014 DEBUG [RS:3;jenkins-hbase17:33755] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:46,014 DEBUG [RS:3;jenkins-hbase17:33755] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:46,014 DEBUG [RS:3;jenkins-hbase17:33755] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:46,014 DEBUG [RS:3;jenkins-hbase17:33755] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:17:46,014 DEBUG [RS:3;jenkins-hbase17:33755] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:46,014 DEBUG [RS:3;jenkins-hbase17:33755] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:46,014 DEBUG [RS:3;jenkins-hbase17:33755] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:46,014 DEBUG [RS:3;jenkins-hbase17:33755] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:17:46,018 INFO [RS:3;jenkins-hbase17:33755] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:46,019 INFO [RS:3;jenkins-hbase17:33755] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:46,019 INFO [RS:3;jenkins-hbase17:33755] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:17:46,029 INFO [RS:3;jenkins-hbase17:33755] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:17:46,029 INFO [RS:3;jenkins-hbase17:33755] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33755,1689938265741-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 11:17:46,039 INFO [RS:3;jenkins-hbase17:33755] regionserver.Replication(203): jenkins-hbase17.apache.org,33755,1689938265741 started 2023-07-21 11:17:46,039 INFO [RS:3;jenkins-hbase17:33755] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,33755,1689938265741, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:33755, sessionid=0x1018798f8bd000b 2023-07-21 11:17:46,039 DEBUG [RS:3;jenkins-hbase17:33755] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:17:46,039 DEBUG [RS:3;jenkins-hbase17:33755] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,33755,1689938265741 2023-07-21 11:17:46,039 DEBUG [RS:3;jenkins-hbase17:33755] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,33755,1689938265741' 2023-07-21 11:17:46,039 DEBUG [RS:3;jenkins-hbase17:33755] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:17:46,040 DEBUG [RS:3;jenkins-hbase17:33755] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:17:46,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:46,040 DEBUG [RS:3;jenkins-hbase17:33755] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:17:46,040 DEBUG [RS:3;jenkins-hbase17:33755] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:17:46,040 DEBUG [RS:3;jenkins-hbase17:33755] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,33755,1689938265741 2023-07-21 11:17:46,040 DEBUG [RS:3;jenkins-hbase17:33755] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,33755,1689938265741' 2023-07-21 11:17:46,040 DEBUG [RS:3;jenkins-hbase17:33755] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:17:46,040 DEBUG [RS:3;jenkins-hbase17:33755] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:17:46,041 DEBUG [RS:3;jenkins-hbase17:33755] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:17:46,041 INFO [RS:3;jenkins-hbase17:33755] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 11:17:46,041 INFO [RS:3;jenkins-hbase17:33755] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 11:17:46,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:46,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:46,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:46,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:46,045 DEBUG [hconnection-0x56c346ec-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:17:46,047 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:51842, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:17:46,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:46,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:46,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37771] to rsgroup master 2023-07-21 11:17:46,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:46,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:54562 deadline: 1689939466056, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 
2023-07-21 11:17:46,057 WARN [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 11:17:46,059 INFO [Listener at localhost.localdomain/37917] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:46,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:46,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:46,060 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33755, jenkins-hbase17.apache.org:36969, jenkins-hbase17.apache.org:37509, jenkins-hbase17.apache.org:39253], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:46,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:46,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:46,116 INFO [Listener at localhost.localdomain/37917] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=558 (was 502) Potentially hanging thread: PacketResponder: BP-1487723784-136.243.18.41-1689938262829:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: jenkins-hbase17:39253Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x16a0d9cc sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$91/1521805986.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62351@0x30086fa6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$91/1521805986.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1487723784-136.243.18.41-1689938262829:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase17:37771 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2134711981-2224 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36969 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 39155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (1799127051) connection to localhost.localdomain/127.0.0.1:42461 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (1799127051) connection to localhost.localdomain/127.0.0.1:42461 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1799127051) connection to localhost.localdomain/127.0.0.1:39155 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost.localdomain/37917-SendThread(127.0.0.1:54201) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost.localdomain/37917-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS:3;jenkins-hbase17:33755 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x258257f1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$91/1521805986.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 41059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp190696872-2255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33755 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1228484640-2289 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/688699424.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/37917.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: qtp2134711981-2219-acceptor-0@162b7908-ServerConnector@50c059fc{HTTP/1.1, (http/1.1)}{0.0.0.0:41487} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 3 on default port 43995 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost.localdomain:42461 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x6bda7a20-SendThread(127.0.0.1:54201) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: pool-536-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1708911443_17 at /127.0.0.1:46460 [Receiving block BP-1487723784-136.243.18.41-1689938262829:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=36969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp190696872-2253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 608566466@qtp-1859722537-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38039 
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
    org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
    org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
    org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
    org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Potentially hanging thread: 933617453@qtp-1533122797-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43745
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
    org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
    org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
    org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
    org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Potentially hanging thread: PacketResponder: BP-1487723784-136.243.18.41-1689938262829:blk_1073741835_1011, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: 1675094850@qtp-1195391071-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44425
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
    org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
    org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
    org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
    org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x42f1f520-SendThread(127.0.0.1:54201)
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223)
Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/dfs/data/data2/current/BP-1487723784-136.243.18.41-1689938262829
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_214056478_17 at /127.0.0.1:54426 [Receiving block BP-1487723784-136.243.18.41-1689938262829:blk_1073741835_1011]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS:0;jenkins-hbase17:37509-longCompactions-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33755
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39253
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: qtp1698355968-2278
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/688699424.run(Unknown Source)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@8ba3c63
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Server handler 1 on default port 39155
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799)
Potentially hanging thread: Session-HouseKeeper-4eeb5087-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp2134711981-2218
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/688699424.run(Unknown Source)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp1567836308-2192
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36969
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: PacketResponder: BP-1487723784-136.243.18.41-1689938262829:blk_1073741829_1005, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Listener at localhost.localdomain/37917-EventThread
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)
Potentially hanging thread: pool-552-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Listener at localhost.localdomain/37917-EventThread
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)
Potentially hanging thread: PacketResponder: BP-1487723784-136.243.18.41-1689938262829:blk_1073741829_1005, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: pool-561-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Timer-29
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)
Potentially hanging thread: nioEventLoopGroup-14-1
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-9-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-13-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@43c5f4c4
    sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249)
    sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113)
    org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85)
    org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x42f1f520-EventThread
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)
Potentially hanging thread: PacketResponder: BP-1487723784-136.243.18.41-1689938262829:blk_1073741834_1010, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Listener at localhost.localdomain/37917-SendThread(127.0.0.1:54201)
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223)
Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x3f1a4df4
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$91/1521805986.run(Unknown Source)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp1698355968-2279-acceptor-0@339923f3-ServerConnector@589eb021{HTTP/1.1, (http/1.1)}{0.0.0.0:35869}
    sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249)
    org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388)
    org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36969
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: Listener at localhost.localdomain/37917.LruBlockCache.EvictionThread
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902)
Potentially hanging thread: PacketResponder: BP-1487723784-136.243.18.41-1689938262829:blk_1073741832_1008, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-12-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp1228484640-2291
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/688699424.run(Unknown Source)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37509
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33755
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/MasterData-prefix:jenkins-hbase17.apache.org,37771,1689938263479
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x42f1f520
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$91/1521805986.run(Unknown Source)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp1698355968-2285
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: hconnection-0x56c346ec-shared-pool-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost.localdomain:39155
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-16-1
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp1567836308-2187
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/688699424.run(Unknown Source)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: ForkJoinPool-2-worker-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824)
    java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693)
    java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39253
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@4083dc1c
    sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249)
    sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113)
    org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85)
    org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: PacketResponder: BP-1487723784-136.243.18.41-1689938262829:blk_1073741835_1011, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Server idle connection scanner for port 39155
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)
Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=36969
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37771
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33755
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: 1459929626@qtp-86513967-0
    java.lang.Object.wait(Native Method)
    org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
Potentially hanging thread: pool-547-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39253
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: BP-1487723784-136.243.18.41-1689938262829 heartbeating to localhost.localdomain/127.0.0.1:39155
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158)
    org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715)
    org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39253
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a-prefix:jenkins-hbase17.apache.org,39253,1689938263758
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Server handler 0 on default port 41059
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799)
Potentially hanging thread: IPC Server handler 2 on default port 43995
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost.localdomain:42461
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: BP-1487723784-136.243.18.41-1689938262829 heartbeating to localhost.localdomain/127.0.0.1:39155
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158)
    org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715)
    org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x6bda7a20-EventThread
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)
Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,45117,1689938257866
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797)
Potentially hanging thread: Timer-35
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33755
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: RS-EventLoopGroup-9-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Listener at localhost.localdomain/34273-SendThread(127.0.0.1:62351)
    java.lang.Thread.sleep(Native Method)
    org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137)
Potentially hanging thread: PacketResponder: BP-1487723784-136.243.18.41-1689938262829:blk_1073741832_1008, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Server idle connection scanner for port 41059
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)
Potentially hanging thread: RS-EventLoopGroup-15-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-11-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1799127051) connection to localhost.localdomain/127.0.0.1:42461 from jenkins.hfs.5
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: Session-HouseKeeper-59e26ac-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_214056478_17 at /127.0.0.1:50608 [Receiving block BP-1487723784-136.243.18.41-1689938262829:blk_1073741832_1008]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37509
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@341e4bf
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp1567836308-2190
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: pool-541-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/dfs/data/data3)
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627)
Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37509
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37771
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost.localdomain:39155
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Listener at localhost.localdomain/37917
    java.lang.Thread.dumpThreads(Native Method)
    java.lang.Thread.getAllStackTraces(Thread.java:1615)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49)
    org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110)
    org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104)
    org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185)
    org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87)
    org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225)
    org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72)
    org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222)
    org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38)
    org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372)
    org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:33755Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37509 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-556-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5ee4ca19-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=36969 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@23ecca52 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1799127051) connection to localhost.localdomain/127.0.0.1:39155 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x5ee4ca19-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:54201 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=36969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:42461 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,37771,1689938263479 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62351@0x30086fa6-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1708911443_17 at /127.0.0.1:50664 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37509 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/dfs/data/data4/current/BP-1487723784-136.243.18.41-1689938262829 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@148030eb java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-3ad6aedf-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33755 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1799127051) connection to localhost.localdomain/127.0.0.1:39155 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_214056478_17 at /127.0.0.1:46466 [Receiving block BP-1487723784-136.243.18.41-1689938262829:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) 
java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1698355968-2281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x6bda7a20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$91/1521805986.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a-prefix:jenkins-hbase17.apache.org,37509,1689938263634 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase17:39253-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 37917 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp250987554-2561 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1487723784-136.243.18.41-1689938262829:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 43995 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp2134711981-2225 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37509 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x5ee4ca19-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x32171ab7-SendThread(127.0.0.1:54201) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x0ad294e3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$91/1521805986.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x32171ab7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$91/1521805986.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/dfs/data/data6/current/BP-1487723784-136.243.18.41-1689938262829 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1698355968-2280 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 372231202@qtp-1859722537-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Listener at localhost.localdomain/37917-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 0 on default port 43995 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp250987554-2562 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp250987554-2559 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase17:36969-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 41059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (1799127051) connection to localhost.localdomain/127.0.0.1:42461 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37509 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2134711981-2223 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@6f59f858 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 39155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp190696872-2254 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 37917 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33755 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1567836308-2194 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@5caea717 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/dfs/data/data3/current/BP-1487723784-136.243.18.41-1689938262829 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1228484640-2292 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/688699424.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1698355968-2282 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_214056478_17 at /127.0.0.1:54348 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33755 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x258257f1-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x5ee4ca19-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-750598600_17 at /127.0.0.1:50576 [Receiving block BP-1487723784-136.243.18.41-1689938262829:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-750598600_17 at /127.0.0.1:54368 [Receiving block BP-1487723784-136.243.18.41-1689938262829:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server 
handler 1 on default port 37917 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_214056478_17 at /127.0.0.1:54402 [Receiving block BP-1487723784-136.243.18.41-1689938262829:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:54201): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost.localdomain:42461 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_214056478_17 at /127.0.0.1:50632 [Receiving block BP-1487723784-136.243.18.41-1689938262829:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 41059 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 37917 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a-prefix:jenkins-hbase17.apache.org,36969,1689938263902 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 43995 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp190696872-2252 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5ee4ca19-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1698355968-2284 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@17ff8dd5 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:37509Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1567836308-2191 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 37917 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) 
Potentially hanging thread: qtp250987554-2557 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/688699424.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1942948133_17 at /127.0.0.1:50610 [Receiving block BP-1487723784-136.243.18.41-1689938262829:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/37917-SendThread(127.0.0.1:54201) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x3f1a4df4-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/dfs/data/data1/current/BP-1487723784-136.243.18.41-1689938262829 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp190696872-2248 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/688699424.run(Unknown Source) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x16a0d9cc-SendThread(127.0.0.1:54201) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=36969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@391a3d7c java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-750598600_17 at /127.0.0.1:46414 [Receiving block BP-1487723784-136.243.18.41-1689938262829:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@3c16c0c3[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: pool-542-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp190696872-2250 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1942948133_17 at /127.0.0.1:54408 [Receiving block BP-1487723784-136.243.18.41-1689938262829:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp250987554-2560 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62351@0x30086fa6-SendThread(127.0.0.1:62351) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: Listener at localhost.localdomain/37917.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33755 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2134711981-2220 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1228484640-2295 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1799127051) connection to localhost.localdomain/127.0.0.1:39155 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1567836308-2188-acceptor-0@4811be93-ServerConnector@5c5ae6a{HTTP/1.1, (http/1.1)}{0.0.0.0:44673} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/37917-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938264166 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: RS-EventLoopGroup-11-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1284808289@qtp-1533122797-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp250987554-2564 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1228484640-2296 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37509 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x56c346ec-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase17:36969 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/37917-SendThread(127.0.0.1:54201) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0x5ee4ca19-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
LeaseRenewer:jenkins@localhost.localdomain:39155 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:36969Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x0ad294e3-SendThread(127.0.0.1:54201) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Client (1799127051) connection to localhost.localdomain/127.0.0.1:39155 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1228484640-2294 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x16a0d9cc-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x32171ab7-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging 
thread: java.util.concurrent.ThreadPoolExecutor$Worker@57045d1b[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 37917 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost.localdomain/37917-SendThread(127.0.0.1:54201) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@3a46e02b java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp250987554-2558-acceptor-0@3bf0ef89-ServerConnector@7621d2b2{HTTP/1.1, (http/1.1)}{0.0.0.0:37005} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1228484640-2293-acceptor-0@b46fe5f-ServerConnector@47fec777{HTTP/1.1, (http/1.1)}{0.0.0.0:46223} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1487723784-136.243.18.41-1689938262829:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5ee4ca19-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost.localdomain:39155 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost.localdomain/34273-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-1487723784-136.243.18.41-1689938262829:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-750598600_17 at /127.0.0.1:46380 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-538-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase17:33755-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(450211261) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: qtp250987554-2563 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: qtp1567836308-2189 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a-prefix:jenkins-hbase17.apache.org,37509,1689938263634.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33755 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1708911443_17 at /127.0.0.1:50618 [Receiving block BP-1487723784-136.243.18.41-1689938262829:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938264166 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x0ad294e3-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@515424ba java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1250330880@qtp-1195391071-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Client (1799127051) connection to 
localhost.localdomain/127.0.0.1:42461 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1487723784-136.243.18.41-1689938262829:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@eab6fc9 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1487723784-136.243.18.41-1689938262829 heartbeating to localhost.localdomain/127.0.0.1:39155 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1487723784-136.243.18.41-1689938262829:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_214056478_17 at /127.0.0.1:46438 [Receiving block BP-1487723784-136.243.18.41-1689938262829:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-50e8809c-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2134711981-2222 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/37917.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server handler 3 on default port 39155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost.localdomain/37917-SendThread(127.0.0.1:54201) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1698355968-2283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5ee4ca19-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37509 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37509 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 43995 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost.localdomain/37917-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 0 on default port 39155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1942948133_17 at /127.0.0.1:46450 [Receiving block BP-1487723784-136.243.18.41-1689938262829:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x258257f1-SendThread(127.0.0.1:54201) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54201@0x3f1a4df4-SendThread(127.0.0.1:54201) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-14-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@52d01fe9[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:37771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1799127051) connection to localhost.localdomain/127.0.0.1:39155 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 1175377389@qtp-86513967-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:38773 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: hconnection-0x1f4fb7fb-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1228484640-2290 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/688699424.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS:1;jenkins-hbase17:39253 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp190696872-2251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-b024a04-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 41059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1567836308-2193 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/dfs/data/data5/current/BP-1487723784-136.243.18.41-1689938262829 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1487723784-136.243.18.41-1689938262829:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1708911443_17 at /127.0.0.1:54412 [Receiving block BP-1487723784-136.243.18.41-1689938262829:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp190696872-2249-acceptor-0@3b5fc381-ServerConnector@14467a90{HTTP/1.1, (http/1.1)}{0.0.0.0:34115} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2134711981-2221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase17:37509 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=837 (was 778) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=580 (was 649), ProcessCount=183 (was 185), AvailableMemoryMB=2998 (was 3338) 2023-07-21 11:17:46,118 WARN [Listener at localhost.localdomain/37917] hbase.ResourceChecker(130): Thread=558 is superior to 500 2023-07-21 11:17:46,135 INFO [Listener at localhost.localdomain/37917] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=558, OpenFileDescriptor=837, MaxFileDescriptor=60000, SystemLoadAverage=580, ProcessCount=183, AvailableMemoryMB=2997 2023-07-21 11:17:46,136 WARN [Listener at localhost.localdomain/37917] hbase.ResourceChecker(130): Thread=558 is superior to 500 2023-07-21 11:17:46,136 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-21 11:17:46,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:46,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:46,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:46,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
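The setup/teardown records around this point show the test driving the master's RSGroupAdminService endpoints (ListRSGroupInfos, MoveTables, MoveServers, RemoveRSGroup, AddRSGroup). Below is a minimal client-side sketch of that same sequence using RSGroupAdminClient, the class visible in the stack trace further down; the Configuration, the class name, and the hard-coded master address are illustrative assumptions, not the test's actual wiring.

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupTeardownSketch {  // hypothetical class name, for illustration only
      public static void main(String[] args) throws Exception {
        // Assumed client configuration; the test uses the mini-cluster's own conf instead.
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // Mirrors the records above: move empty table/server sets back to "default"
          // (the server logs "passed an empty set. Ignoring." for these) ...
          rsGroupAdmin.moveTables(Collections.emptySet(), "default");
          rsGroupAdmin.moveServers(Collections.emptySet(), "default");

          // ... drop and re-create the "master" group used by the test ...
          rsGroupAdmin.removeRSGroup("master");
          rsGroupAdmin.addRSGroup("master");

          // ... then try to move the master's address into it. The group manager only
          // knows live region servers, so this is rejected with the ConstraintException
          // ("is either offline or it does not exist") seen later in this log.
          Address master = Address.fromParts("jenkins-hbase17.apache.org", 37771);
          rsGroupAdmin.moveServers(Collections.singleton(master), "master");
        }
      }
    }

The final moveServers call is the one that produces the WARN "Got this on setup, FYI" entry below: TestRSGroupsBase tolerates the ConstraintException during setup rather than failing the test.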
2023-07-21 11:17:46,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:46,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:46,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:46,141 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:46,142 INFO [RS:3;jenkins-hbase17:33755] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C33755%2C1689938265741, suffix=, logDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/WALs/jenkins-hbase17.apache.org,33755,1689938265741, archiveDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/oldWALs, maxLogs=32 2023-07-21 11:17:46,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:46,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:46,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:46,148 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:46,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:46,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:46,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:46,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:46,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:46,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:46,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:46,169 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in 
unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40211,DS-044b3e42-0a47-489d-9295-f48998774954,DISK] 2023-07-21 11:17:46,169 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45051,DS-488ee4b5-1933-4b27-8edd-ef92db9d53f5,DISK] 2023-07-21 11:17:46,169 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45343,DS-dadc5c6a-fe7e-4a9e-82a4-3ac7321013bd,DISK] 2023-07-21 11:17:46,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37771] to rsgroup master 2023-07-21 11:17:46,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:46,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:54562 deadline: 1689939466172, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 2023-07-21 11:17:46,173 WARN [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 11:17:46,174 INFO [Listener at localhost.localdomain/37917] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:46,175 INFO [RS:3;jenkins-hbase17:33755] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/WALs/jenkins-hbase17.apache.org,33755,1689938265741/jenkins-hbase17.apache.org%2C33755%2C1689938265741.1689938266143 2023-07-21 11:17:46,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:46,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:46,175 DEBUG [RS:3;jenkins-hbase17:33755] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40211,DS-044b3e42-0a47-489d-9295-f48998774954,DISK], DatanodeInfoWithStorage[127.0.0.1:45051,DS-488ee4b5-1933-4b27-8edd-ef92db9d53f5,DISK], DatanodeInfoWithStorage[127.0.0.1:45343,DS-dadc5c6a-fe7e-4a9e-82a4-3ac7321013bd,DISK]] 2023-07-21 11:17:46,175 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33755, jenkins-hbase17.apache.org:36969, jenkins-hbase17.apache.org:37509, jenkins-hbase17.apache.org:39253], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:46,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:46,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:46,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:46,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-21 11:17:46,180 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:17:46,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-21 11:17:46,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 11:17:46,181 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 
2023-07-21 11:17:46,182 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:46,182 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:46,183 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:17:46,185 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp/data/default/t1/399a800cb68708f287787dd1753a6f30 2023-07-21 11:17:46,185 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp/data/default/t1/399a800cb68708f287787dd1753a6f30 empty. 2023-07-21 11:17:46,186 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp/data/default/t1/399a800cb68708f287787dd1753a6f30 2023-07-21 11:17:46,186 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-21 11:17:46,200 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-21 11:17:46,201 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 399a800cb68708f287787dd1753a6f30, NAME => 't1,,1689938266177.399a800cb68708f287787dd1753a6f30.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp 2023-07-21 11:17:46,213 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689938266177.399a800cb68708f287787dd1753a6f30.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:46,213 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 399a800cb68708f287787dd1753a6f30, disabling compactions & flushes 2023-07-21 11:17:46,213 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689938266177.399a800cb68708f287787dd1753a6f30. 2023-07-21 11:17:46,213 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689938266177.399a800cb68708f287787dd1753a6f30. 2023-07-21 11:17:46,213 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689938266177.399a800cb68708f287787dd1753a6f30. after waiting 0 ms 2023-07-21 11:17:46,213 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689938266177.399a800cb68708f287787dd1753a6f30. 2023-07-21 11:17:46,213 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689938266177.399a800cb68708f287787dd1753a6f30. 
2023-07-21 11:17:46,213 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 399a800cb68708f287787dd1753a6f30: 2023-07-21 11:17:46,215 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:17:46,216 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689938266177.399a800cb68708f287787dd1753a6f30.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689938266216"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938266216"}]},"ts":"1689938266216"} 2023-07-21 11:17:46,217 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 11:17:46,218 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:17:46,218 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938266218"}]},"ts":"1689938266218"} 2023-07-21 11:17:46,219 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-21 11:17:46,221 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:17:46,221 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:17:46,221 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:17:46,221 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:17:46,222 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 11:17:46,222 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:17:46,222 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=399a800cb68708f287787dd1753a6f30, ASSIGN}] 2023-07-21 11:17:46,223 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=399a800cb68708f287787dd1753a6f30, ASSIGN 2023-07-21 11:17:46,224 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=399a800cb68708f287787dd1753a6f30, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36969,1689938263902; forceNewPlan=false, retain=false 2023-07-21 11:17:46,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 11:17:46,374 INFO [jenkins-hbase17:37771] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 11:17:46,375 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=399a800cb68708f287787dd1753a6f30, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:46,376 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689938266177.399a800cb68708f287787dd1753a6f30.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689938266375"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938266375"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938266375"}]},"ts":"1689938266375"} 2023-07-21 11:17:46,377 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 399a800cb68708f287787dd1753a6f30, server=jenkins-hbase17.apache.org,36969,1689938263902}] 2023-07-21 11:17:46,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 11:17:46,529 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:46,530 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:17:46,530 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:44882, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:17:46,534 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open t1,,1689938266177.399a800cb68708f287787dd1753a6f30. 2023-07-21 11:17:46,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 399a800cb68708f287787dd1753a6f30, NAME => 't1,,1689938266177.399a800cb68708f287787dd1753a6f30.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:17:46,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 399a800cb68708f287787dd1753a6f30 2023-07-21 11:17:46,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated t1,,1689938266177.399a800cb68708f287787dd1753a6f30.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:17:46,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 399a800cb68708f287787dd1753a6f30 2023-07-21 11:17:46,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 399a800cb68708f287787dd1753a6f30 2023-07-21 11:17:46,535 INFO [StoreOpener-399a800cb68708f287787dd1753a6f30-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 399a800cb68708f287787dd1753a6f30 2023-07-21 11:17:46,536 DEBUG [StoreOpener-399a800cb68708f287787dd1753a6f30-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/default/t1/399a800cb68708f287787dd1753a6f30/cf1 2023-07-21 11:17:46,537 DEBUG 
[StoreOpener-399a800cb68708f287787dd1753a6f30-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/default/t1/399a800cb68708f287787dd1753a6f30/cf1 2023-07-21 11:17:46,537 INFO [StoreOpener-399a800cb68708f287787dd1753a6f30-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 399a800cb68708f287787dd1753a6f30 columnFamilyName cf1 2023-07-21 11:17:46,537 INFO [StoreOpener-399a800cb68708f287787dd1753a6f30-1] regionserver.HStore(310): Store=399a800cb68708f287787dd1753a6f30/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:17:46,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/default/t1/399a800cb68708f287787dd1753a6f30 2023-07-21 11:17:46,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/default/t1/399a800cb68708f287787dd1753a6f30 2023-07-21 11:17:46,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 399a800cb68708f287787dd1753a6f30 2023-07-21 11:17:46,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/default/t1/399a800cb68708f287787dd1753a6f30/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:17:46,543 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 399a800cb68708f287787dd1753a6f30; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11261449920, jitterRate=0.048804253339767456}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:17:46,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 399a800cb68708f287787dd1753a6f30: 2023-07-21 11:17:46,544 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689938266177.399a800cb68708f287787dd1753a6f30., pid=14, masterSystemTime=1689938266529 2023-07-21 11:17:46,547 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689938266177.399a800cb68708f287787dd1753a6f30. 2023-07-21 11:17:46,548 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened t1,,1689938266177.399a800cb68708f287787dd1753a6f30. 
2023-07-21 11:17:46,548 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=399a800cb68708f287787dd1753a6f30, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:46,548 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689938266177.399a800cb68708f287787dd1753a6f30.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689938266548"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938266548"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938266548"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938266548"}]},"ts":"1689938266548"} 2023-07-21 11:17:46,551 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-21 11:17:46,551 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 399a800cb68708f287787dd1753a6f30, server=jenkins-hbase17.apache.org,36969,1689938263902 in 172 msec 2023-07-21 11:17:46,551 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-21 11:17:46,552 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=399a800cb68708f287787dd1753a6f30, ASSIGN in 329 msec 2023-07-21 11:17:46,552 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:17:46,552 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938266552"}]},"ts":"1689938266552"} 2023-07-21 11:17:46,553 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-21 11:17:46,554 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:17:46,555 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 377 msec 2023-07-21 11:17:46,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 11:17:46,784 INFO [Listener at localhost.localdomain/37917] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-21 11:17:46,784 DEBUG [Listener at localhost.localdomain/37917] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-21 11:17:46,785 INFO [Listener at localhost.localdomain/37917] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:46,787 INFO [Listener at localhost.localdomain/37917] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-21 11:17:46,787 INFO [Listener at localhost.localdomain/37917] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:46,787 INFO [Listener at localhost.localdomain/37917] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-21 11:17:46,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:17:46,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-21 11:17:46,791 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:17:46,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-21 11:17:46,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 354 connection: 136.243.18.41:54562 deadline: 1689938326788, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-21 11:17:46,795 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-21 11:17:46,831 INFO [Listener at localhost.localdomain/37917] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:46,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:46,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:46,832 INFO [Listener at localhost.localdomain/37917] client.HBaseAdmin$15(890): Started disable of t1 2023-07-21 11:17:46,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable t1 2023-07-21 11:17:46,833 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-21 11:17:46,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 11:17:46,836 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938266836"}]},"ts":"1689938266836"} 2023-07-21 11:17:46,837 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-21 11:17:46,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 11:17:46,977 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-21 11:17:46,979 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=399a800cb68708f287787dd1753a6f30, UNASSIGN}] 2023-07-21 11:17:46,979 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=399a800cb68708f287787dd1753a6f30, UNASSIGN 2023-07-21 11:17:46,980 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=399a800cb68708f287787dd1753a6f30, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:46,980 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689938266177.399a800cb68708f287787dd1753a6f30.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689938266980"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938266980"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938266980"}]},"ts":"1689938266980"} 2023-07-21 11:17:46,981 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 399a800cb68708f287787dd1753a6f30, server=jenkins-hbase17.apache.org,36969,1689938263902}] 2023-07-21 11:17:47,133 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 399a800cb68708f287787dd1753a6f30 2023-07-21 11:17:47,133 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 399a800cb68708f287787dd1753a6f30, disabling compactions & flushes 2023-07-21 11:17:47,133 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region t1,,1689938266177.399a800cb68708f287787dd1753a6f30. 2023-07-21 11:17:47,133 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689938266177.399a800cb68708f287787dd1753a6f30. 2023-07-21 11:17:47,133 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689938266177.399a800cb68708f287787dd1753a6f30. after waiting 0 ms 2023-07-21 11:17:47,133 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689938266177.399a800cb68708f287787dd1753a6f30. 
2023-07-21 11:17:47,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/default/t1/399a800cb68708f287787dd1753a6f30/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:17:47,137 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed t1,,1689938266177.399a800cb68708f287787dd1753a6f30. 2023-07-21 11:17:47,137 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 399a800cb68708f287787dd1753a6f30: 2023-07-21 11:17:47,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 11:17:47,139 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 399a800cb68708f287787dd1753a6f30 2023-07-21 11:17:47,139 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=399a800cb68708f287787dd1753a6f30, regionState=CLOSED 2023-07-21 11:17:47,139 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689938266177.399a800cb68708f287787dd1753a6f30.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689938267139"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938267139"}]},"ts":"1689938267139"} 2023-07-21 11:17:47,146 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-21 11:17:47,146 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 399a800cb68708f287787dd1753a6f30, server=jenkins-hbase17.apache.org,36969,1689938263902 in 164 msec 2023-07-21 11:17:47,148 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-21 11:17:47,148 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=399a800cb68708f287787dd1753a6f30, UNASSIGN in 167 msec 2023-07-21 11:17:47,148 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938267148"}]},"ts":"1689938267148"} 2023-07-21 11:17:47,149 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-21 11:17:47,150 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-21 11:17:47,152 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 318 msec 2023-07-21 11:17:47,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 11:17:47,439 INFO [Listener at localhost.localdomain/37917] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-21 11:17:47,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete t1 2023-07-21 11:17:47,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-21 11:17:47,442 DEBUG [PEWorker-3] 
procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-21 11:17:47,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-21 11:17:47,443 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-21 11:17:47,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:47,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:47,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:47,445 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp/data/default/t1/399a800cb68708f287787dd1753a6f30 2023-07-21 11:17:47,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 11:17:47,447 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp/data/default/t1/399a800cb68708f287787dd1753a6f30/cf1, FileablePath, hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp/data/default/t1/399a800cb68708f287787dd1753a6f30/recovered.edits] 2023-07-21 11:17:47,451 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp/data/default/t1/399a800cb68708f287787dd1753a6f30/recovered.edits/4.seqid to hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/archive/data/default/t1/399a800cb68708f287787dd1753a6f30/recovered.edits/4.seqid 2023-07-21 11:17:47,452 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/.tmp/data/default/t1/399a800cb68708f287787dd1753a6f30 2023-07-21 11:17:47,452 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-21 11:17:47,454 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-21 11:17:47,455 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-21 11:17:47,456 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-21 11:17:47,457 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-21 11:17:47,457 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-21 11:17:47,458 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689938266177.399a800cb68708f287787dd1753a6f30.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938267457"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:47,459 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 11:17:47,459 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 399a800cb68708f287787dd1753a6f30, NAME => 't1,,1689938266177.399a800cb68708f287787dd1753a6f30.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 11:17:47,459 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-21 11:17:47,459 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689938267459"}]},"ts":"9223372036854775807"} 2023-07-21 11:17:47,461 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-21 11:17:47,464 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-21 11:17:47,465 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 24 msec 2023-07-21 11:17:47,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 11:17:47,547 INFO [Listener at localhost.localdomain/37917] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-21 11:17:47,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:47,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 11:17:47,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:47,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:47,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:47,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:47,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:47,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:47,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:47,559 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:47,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:47,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:47,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:47,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:47,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:47,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37771] to rsgroup master 2023-07-21 11:17:47,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:47,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.CallRunner(144): callId: 107 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:54562 deadline: 1689939467567, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 2023-07-21 11:17:47,568 WARN [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:17:47,571 INFO [Listener at localhost.localdomain/37917] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:47,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,572 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33755, jenkins-hbase17.apache.org:36969, jenkins-hbase17.apache.org:37509, jenkins-hbase17.apache.org:39253], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:47,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:47,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:47,594 INFO [Listener at localhost.localdomain/37917] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=572 (was 558) - Thread LEAK? -, OpenFileDescriptor=851 (was 837) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=580 (was 580), ProcessCount=183 (was 183), AvailableMemoryMB=2988 (was 2997) 2023-07-21 11:17:47,594 WARN [Listener at localhost.localdomain/37917] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-21 11:17:47,616 INFO [Listener at localhost.localdomain/37917] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=572, OpenFileDescriptor=851, MaxFileDescriptor=60000, SystemLoadAverage=580, ProcessCount=183, AvailableMemoryMB=2988 2023-07-21 11:17:47,616 WARN [Listener at localhost.localdomain/37917] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-21 11:17:47,616 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-21 11:17:47,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:47,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 11:17:47,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:47,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:47,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:47,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:47,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:47,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:47,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:47,632 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:47,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:47,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 
2023-07-21 11:17:47,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:47,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:47,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:47,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37771] to rsgroup master 2023-07-21 11:17:47,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:47,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.CallRunner(144): callId: 135 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:54562 deadline: 1689939467642, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 2023-07-21 11:17:47,642 WARN [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 11:17:47,644 INFO [Listener at localhost.localdomain/37917] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:47,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,645 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33755, jenkins-hbase17.apache.org:36969, jenkins-hbase17.apache.org:37509, jenkins-hbase17.apache.org:39253], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:47,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:47,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:47,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-21 11:17:47,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 11:17:47,648 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-21 11:17:47,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-21 11:17:47,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 11:17:47,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:47,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
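For context on the ConstraintException that recurs in the setup/teardown entries above ("Got this on setup, FYI"): the stack traces show TestRSGroupsBase.tearDownAfterMethod calling RSGroupAdminClient.moveServers with the master's RPC address, which RSGroupAdminServer rejects because the master is not an online region server. The following is a minimal hedged sketch of that client-side call, not code from the test itself; class and method names are taken from the stack traces, while the configuration/connection wiring and class name are assumed.

```java
// Hedged sketch, not part of the test run logged above: reproduces the client call
// that yields "Server ... is either offline or it does not exist." in this log.
import java.io.IOException;
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterServerSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // The master's endpoint from the log; it is not a live region server,
      // so the server side is expected to reject the move.
      Address master = Address.fromParts("jenkins-hbase17.apache.org", 37771);
      try {
        rsGroupAdmin.moveServers(Collections.singleton(master), "master");
      } catch (IOException e) {
        // Matches the logged ConstraintException message.
        System.out.println("Move rejected as expected: " + e.getMessage());
      }
    }
  }
}
```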
2023-07-21 11:17:47,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:47,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:47,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:47,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:47,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:47,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:47,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:47,665 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:47,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:47,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:47,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:47,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:47,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:47,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37771] to rsgroup master 2023-07-21 11:17:47,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:47,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.CallRunner(144): callId: 170 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:54562 deadline: 1689939467674, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 2023-07-21 11:17:47,675 WARN [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:17:47,677 INFO [Listener at localhost.localdomain/37917] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:47,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,678 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33755, jenkins-hbase17.apache.org:36969, jenkins-hbase17.apache.org:37509, jenkins-hbase17.apache.org:39253], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:47,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:47,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:47,698 INFO [Listener at localhost.localdomain/37917] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=574 (was 572) - Thread LEAK? 
-, OpenFileDescriptor=851 (was 851), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=580 (was 580), ProcessCount=183 (was 183), AvailableMemoryMB=2987 (was 2988) 2023-07-21 11:17:47,698 WARN [Listener at localhost.localdomain/37917] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-21 11:17:47,718 INFO [Listener at localhost.localdomain/37917] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=574, OpenFileDescriptor=851, MaxFileDescriptor=60000, SystemLoadAverage=580, ProcessCount=183, AvailableMemoryMB=2986 2023-07-21 11:17:47,718 WARN [Listener at localhost.localdomain/37917] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-21 11:17:47,718 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-21 11:17:47,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:47,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 11:17:47,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:47,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:47,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:47,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:47,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:47,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:47,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:47,729 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:47,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:47,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating 
znode: /hbase/rsgroup/default 2023-07-21 11:17:47,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:47,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:47,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:47,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37771] to rsgroup master 2023-07-21 11:17:47,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:47,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.CallRunner(144): callId: 198 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:54562 deadline: 1689939467739, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 2023-07-21 11:17:47,739 WARN [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 11:17:47,740 INFO [Listener at localhost.localdomain/37917] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:47,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,741 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33755, jenkins-hbase17.apache.org:36969, jenkins-hbase17.apache.org:37509, jenkins-hbase17.apache.org:39253], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:47,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:47,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:47,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:47,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
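The "remove rsgroup master" / "add rsgroup master" pairs and the /hbase/rsgroup znode updates in the teardown entries above correspond to the RemoveRSGroup and AddRSGroup admin RPCs. A minimal hedged sketch of that cycle follows; it is not taken from the test code, the method names mirror the RSGroupAdminService request names in the log, and the connection setup is assumed.

```java
// Hedged sketch (assumed wiring): drop and recreate the empty "master" rsgroup.
// Each call is persisted by the master as an update under /hbase/rsgroup, as logged.
import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class ResetMasterGroupSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.removeRSGroup("master");
      rsGroupAdmin.addRSGroup("master");
    }
  }
}
```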
2023-07-21 11:17:47,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:47,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:47,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:47,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:47,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:47,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:47,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:47,754 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:47,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:47,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:47,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:47,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:47,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:47,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37771] to rsgroup master 2023-07-21 11:17:47,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:47,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.CallRunner(144): callId: 226 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:54562 deadline: 1689939467762, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 2023-07-21 11:17:47,763 WARN [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:17:47,764 INFO [Listener at localhost.localdomain/37917] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:47,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,766 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33755, jenkins-hbase17.apache.org:36969, jenkins-hbase17.apache.org:37509, jenkins-hbase17.apache.org:39253], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:47,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:47,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:47,787 INFO [Listener at localhost.localdomain/37917] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=575 (was 574) - Thread LEAK? 
-, OpenFileDescriptor=847 (was 851), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=580 (was 580), ProcessCount=183 (was 183), AvailableMemoryMB=2985 (was 2986) 2023-07-21 11:17:47,787 WARN [Listener at localhost.localdomain/37917] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-21 11:17:47,807 INFO [Listener at localhost.localdomain/37917] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=575, OpenFileDescriptor=847, MaxFileDescriptor=60000, SystemLoadAverage=580, ProcessCount=183, AvailableMemoryMB=2985 2023-07-21 11:17:47,807 WARN [Listener at localhost.localdomain/37917] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-21 11:17:47,807 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-21 11:17:47,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:47,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 11:17:47,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:47,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:47,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:47,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:47,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:47,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:47,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:47,821 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:47,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:47,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/default 2023-07-21 11:17:47,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:47,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:47,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:47,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37771] to rsgroup master 2023-07-21 11:17:47,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:47,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.CallRunner(144): callId: 254 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:54562 deadline: 1689939467832, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 2023-07-21 11:17:47,832 WARN [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 11:17:47,834 INFO [Listener at localhost.localdomain/37917] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:47,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,835 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33755, jenkins-hbase17.apache.org:36969, jenkins-hbase17.apache.org:37509, jenkins-hbase17.apache.org:39253], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:47,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:47,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:47,836 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-21 11:17:47,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup Group_foo 2023-07-21 11:17:47,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-21 11:17:47,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:47,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:47,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:17:47,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:47,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:47,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:47,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.HMaster$15(3014): Client=jenkins//136.243.18.41 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-21 11:17:47,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, 
namespace=Group_foo 2023-07-21 11:17:47,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 11:17:47,860 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:17:47,862 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 12 msec 2023-07-21 11:17:47,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 11:17:47,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup Group_foo 2023-07-21 11:17:47,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:47,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.CallRunner(144): callId: 270 service: MasterService methodName: ExecMasterService size: 91 connection: 136.243.18.41:54562 deadline: 1689939467958, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-21 11:17:47,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.HMaster$16(3053): Client=jenkins//136.243.18.41 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-21 11:17:47,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-21 11:17:47,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-21 11:17:47,977 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-21 11:17:47,978 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 12 msec 2023-07-21 11:17:48,077 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-21 11:17:48,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup Group_anotherGroup 2023-07-21 11:17:48,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-21 11:17:48,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:48,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-21 11:17:48,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:48,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 11:17:48,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:48,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:48,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:48,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.HMaster$17(3086): Client=jenkins//136.243.18.41 delete Group_foo 2023-07-21 11:17:48,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 11:17:48,091 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 11:17:48,094 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 11:17:48,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-21 11:17:48,096 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 11:17:48,097 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-21 11:17:48,097 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 
11:17:48,097 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 11:17:48,099 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 11:17:48,099 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-21 11:17:48,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-21 11:17:48,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup Group_foo 2023-07-21 11:17:48,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-21 11:17:48,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:48,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:48,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-21 11:17:48,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:48,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:48,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:48,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:48,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.CallRunner(144): callId: 292 service: MasterService methodName: CreateNamespace size: 70 connection: 136.243.18.41:54562 deadline: 1689938328205, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-21 11:17:48,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:48,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:48,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:48,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 11:17:48,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:48,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:48,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:48,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup Group_anotherGroup 2023-07-21 11:17:48,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:48,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:48,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 11:17:48,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:48,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:17:48,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 11:17:48,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:17:48,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:17:48,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:17:48,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:17:48,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:48,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:17:48,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:17:48,223 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:17:48,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:17:48,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:17:48,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:17:48,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:17:48,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:17:48,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:48,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:48,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37771] to rsgroup master 2023-07-21 11:17:48,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:17:48,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] ipc.CallRunner(144): callId: 322 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:54562 deadline: 1689939468230, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 2023-07-21 11:17:48,230 WARN [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:37771 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:17:48,232 INFO [Listener at localhost.localdomain/37917] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:17:48,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:17:48,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:17:48,233 INFO [Listener at localhost.localdomain/37917] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33755, jenkins-hbase17.apache.org:36969, jenkins-hbase17.apache.org:37509, jenkins-hbase17.apache.org:39253], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:17:48,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:17:48,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:17:48,251 INFO [Listener at localhost.localdomain/37917] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=574 (was 575), OpenFileDescriptor=847 (was 847), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=580 (was 580), ProcessCount=183 (was 183), AvailableMemoryMB=2982 (was 2985) 2023-07-21 11:17:48,252 WARN [Listener at localhost.localdomain/37917] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-21 11:17:48,252 INFO [Listener at localhost.localdomain/37917] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 11:17:48,252 INFO [Listener at localhost.localdomain/37917] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 11:17:48,252 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x16a0d9cc to 127.0.0.1:54201 2023-07-21 11:17:48,252 DEBUG [Listener at localhost.localdomain/37917] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:48,252 DEBUG 
[Listener at localhost.localdomain/37917] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 11:17:48,252 DEBUG [Listener at localhost.localdomain/37917] util.JVMClusterUtil(257): Found active master hash=824652773, stopped=false 2023-07-21 11:17:48,252 DEBUG [Listener at localhost.localdomain/37917] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 11:17:48,252 DEBUG [Listener at localhost.localdomain/37917] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 11:17:48,252 INFO [Listener at localhost.localdomain/37917] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,37771,1689938263479 2023-07-21 11:17:48,253 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:48,253 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:48,253 INFO [Listener at localhost.localdomain/37917] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 11:17:48,253 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:33755-0x1018798f8bd000b, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:48,253 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:48,253 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:17:48,254 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:48,254 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:48,254 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:48,254 DEBUG [Listener at localhost.localdomain/37917] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6bda7a20 to 127.0.0.1:54201 2023-07-21 11:17:48,254 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:48,254 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33755-0x1018798f8bd000b, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does 
not yet exist, /hbase/running 2023-07-21 11:17:48,254 DEBUG [Listener at localhost.localdomain/37917] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:48,254 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:17:48,254 INFO [Listener at localhost.localdomain/37917] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,37509,1689938263634' ***** 2023-07-21 11:17:48,254 INFO [Listener at localhost.localdomain/37917] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:17:48,254 INFO [RS:0;jenkins-hbase17:37509] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:17:48,255 INFO [Listener at localhost.localdomain/37917] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,39253,1689938263758' ***** 2023-07-21 11:17:48,257 INFO [Listener at localhost.localdomain/37917] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:17:48,258 INFO [Listener at localhost.localdomain/37917] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,36969,1689938263902' ***** 2023-07-21 11:17:48,258 INFO [Listener at localhost.localdomain/37917] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:17:48,258 INFO [RS:1;jenkins-hbase17:39253] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:17:48,258 INFO [RS:2;jenkins-hbase17:36969] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:17:48,258 INFO [Listener at localhost.localdomain/37917] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,33755,1689938265741' ***** 2023-07-21 11:17:48,259 INFO [Listener at localhost.localdomain/37917] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:17:48,259 INFO [RS:3;jenkins-hbase17:33755] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:17:48,264 INFO [RS:0;jenkins-hbase17:37509] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@27c6a71{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:48,265 INFO [RS:1;jenkins-hbase17:39253] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@29b96fd7{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:48,265 INFO [RS:2;jenkins-hbase17:36969] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@745678b7{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:48,266 INFO [RS:1;jenkins-hbase17:39253] server.AbstractConnector(383): Stopped ServerConnector@14467a90{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:17:48,266 INFO [RS:0;jenkins-hbase17:37509] server.AbstractConnector(383): Stopped ServerConnector@50c059fc{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:17:48,266 INFO [RS:1;jenkins-hbase17:39253] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:17:48,266 INFO [RS:2;jenkins-hbase17:36969] 
server.AbstractConnector(383): Stopped ServerConnector@589eb021{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:17:48,266 INFO [RS:0;jenkins-hbase17:37509] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:17:48,266 INFO [RS:3;jenkins-hbase17:33755] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@43c8713c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:17:48,267 INFO [RS:2;jenkins-hbase17:36969] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:17:48,269 INFO [RS:0;jenkins-hbase17:37509] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@e9037e3{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:17:48,267 INFO [RS:1;jenkins-hbase17:39253] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@dce9de5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:17:48,270 INFO [RS:2;jenkins-hbase17:36969] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@c847ff7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:17:48,270 INFO [RS:3;jenkins-hbase17:33755] server.AbstractConnector(383): Stopped ServerConnector@7621d2b2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:17:48,271 INFO [RS:0;jenkins-hbase17:37509] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@299bc8ee{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/hadoop.log.dir/,STOPPED} 2023-07-21 11:17:48,271 INFO [RS:1;jenkins-hbase17:39253] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6e08737b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/hadoop.log.dir/,STOPPED} 2023-07-21 11:17:48,273 INFO [RS:3;jenkins-hbase17:33755] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:17:48,273 INFO [RS:2;jenkins-hbase17:36969] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@24f42a5b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/hadoop.log.dir/,STOPPED} 2023-07-21 11:17:48,274 INFO [RS:3;jenkins-hbase17:33755] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2a6666b1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:17:48,275 INFO [RS:3;jenkins-hbase17:33755] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@388658bf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/hadoop.log.dir/,STOPPED} 2023-07-21 11:17:48,275 INFO [RS:1;jenkins-hbase17:39253] regionserver.HeapMemoryManager(220): Stopping 
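[Editor's illustration, not part of the captured log] The "Received ZooKeeper Event, type=NodeDeleted ... path=/hbase/running" and "Set watcher on znode that does not yet exist, /hbase/running" entries above come from each process's ZooKeeper watcher reacting to the removal of the cluster's running marker znode, which is what signals the region servers to begin stopping. As a hedged sketch only (not HBase's actual ZKWatcher code), watching a marker znode for deletion with the raw ZooKeeper client could look like the following; the quorum string, session timeout, and znode path are assumptions for the example.

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Illustrative only: watch a marker znode (e.g. /hbase/running) and react when it is deleted.
// The quorum string, session timeout and znode path are assumptions for this sketch.
public class RunningZnodeWatchSketch {
  public static void main(String[] args) throws Exception {
    final String markerPath = "/hbase/running";            // assumed path, mirrors the log above
    final ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30_000, event -> { });

    Watcher deletionWatcher = new Watcher() {
      @Override
      public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeDeleted
            && markerPath.equals(event.getPath())) {
          System.out.println("Marker znode deleted -> begin shutdown");
        } else {
          // ZooKeeper watches are one-shot, so re-arm the watch for later events.
          try {
            zk.exists(markerPath, this);
          } catch (Exception e) {
            e.printStackTrace();
          }
        }
      }
    };

    // exists() registers the watch even when the znode is currently absent, which is why
    // the log above can report setting a watcher "on znode that does not yet exist".
    zk.exists(markerPath, deletionWatcher);
    Thread.sleep(Long.MAX_VALUE);                          // keep the sketch process alive
  }
}
```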
2023-07-21 11:17:48,275 INFO [RS:1;jenkins-hbase17:39253] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:17:48,275 INFO [RS:1;jenkins-hbase17:39253] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:17:48,275 INFO [RS:1;jenkins-hbase17:39253] regionserver.HRegionServer(3305): Received CLOSE for 6569624467c6eda0eba15844d5358c3c 2023-07-21 11:17:48,276 INFO [RS:2;jenkins-hbase17:36969] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:17:48,276 INFO [RS:2;jenkins-hbase17:36969] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:17:48,276 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:17:48,276 INFO [RS:2;jenkins-hbase17:36969] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:17:48,276 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:17:48,276 INFO [RS:1;jenkins-hbase17:39253] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,39253,1689938263758 2023-07-21 11:17:48,276 INFO [RS:2;jenkins-hbase17:36969] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:48,276 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 6569624467c6eda0eba15844d5358c3c, disabling compactions & flushes 2023-07-21 11:17:48,276 DEBUG [RS:1;jenkins-hbase17:39253] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x32171ab7 to 127.0.0.1:54201 2023-07-21 11:17:48,276 INFO [RS:3;jenkins-hbase17:33755] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:17:48,277 INFO [RS:0;jenkins-hbase17:37509] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:17:48,277 INFO [RS:3;jenkins-hbase17:33755] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:17:48,277 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:17:48,277 DEBUG [RS:1;jenkins-hbase17:39253] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:48,277 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c. 2023-07-21 11:17:48,276 DEBUG [RS:2;jenkins-hbase17:36969] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0ad294e3 to 127.0.0.1:54201 2023-07-21 11:17:48,277 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c. 2023-07-21 11:17:48,277 INFO [RS:1;jenkins-hbase17:39253] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 11:17:48,277 INFO [RS:3;jenkins-hbase17:33755] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:17:48,277 INFO [RS:3;jenkins-hbase17:33755] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,33755,1689938265741 2023-07-21 11:17:48,277 INFO [RS:0;jenkins-hbase17:37509] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-21 11:17:48,277 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:17:48,277 INFO [RS:0;jenkins-hbase17:37509] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:17:48,277 DEBUG [RS:3;jenkins-hbase17:33755] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x258257f1 to 127.0.0.1:54201 2023-07-21 11:17:48,278 DEBUG [RS:3;jenkins-hbase17:33755] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:48,277 DEBUG [RS:1;jenkins-hbase17:39253] regionserver.HRegionServer(1478): Online Regions={6569624467c6eda0eba15844d5358c3c=hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c.} 2023-07-21 11:17:48,277 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c. after waiting 0 ms 2023-07-21 11:17:48,277 DEBUG [RS:2;jenkins-hbase17:36969] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:48,278 DEBUG [RS:1;jenkins-hbase17:39253] regionserver.HRegionServer(1504): Waiting on 6569624467c6eda0eba15844d5358c3c 2023-07-21 11:17:48,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c. 2023-07-21 11:17:48,278 INFO [RS:3;jenkins-hbase17:33755] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,33755,1689938265741; all regions closed. 2023-07-21 11:17:48,277 INFO [RS:0;jenkins-hbase17:37509] regionserver.HRegionServer(3305): Received CLOSE for 5437b1c01d72b0da90c7ad3989ee4c8c 2023-07-21 11:17:48,278 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 6569624467c6eda0eba15844d5358c3c 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-21 11:17:48,278 INFO [RS:2;jenkins-hbase17:36969] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,36969,1689938263902; all regions closed. 2023-07-21 11:17:48,282 INFO [RS:0;jenkins-hbase17:37509] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,37509,1689938263634 2023-07-21 11:17:48,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 5437b1c01d72b0da90c7ad3989ee4c8c, disabling compactions & flushes 2023-07-21 11:17:48,282 DEBUG [RS:0;jenkins-hbase17:37509] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x42f1f520 to 127.0.0.1:54201 2023-07-21 11:17:48,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c. 2023-07-21 11:17:48,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c. 2023-07-21 11:17:48,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c. after waiting 0 ms 2023-07-21 11:17:48,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c. 
2023-07-21 11:17:48,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 5437b1c01d72b0da90c7ad3989ee4c8c 1/1 column families, dataSize=6.53 KB heapSize=10.82 KB 2023-07-21 11:17:48,282 DEBUG [RS:0;jenkins-hbase17:37509] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:48,283 INFO [RS:0;jenkins-hbase17:37509] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:17:48,285 INFO [RS:0;jenkins-hbase17:37509] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:17:48,285 INFO [RS:0;jenkins-hbase17:37509] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 11:17:48,285 INFO [RS:0;jenkins-hbase17:37509] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 11:17:48,286 INFO [RS:0;jenkins-hbase17:37509] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-21 11:17:48,286 DEBUG [RS:0;jenkins-hbase17:37509] regionserver.HRegionServer(1478): Online Regions={5437b1c01d72b0da90c7ad3989ee4c8c=hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c., 1588230740=hbase:meta,,1.1588230740} 2023-07-21 11:17:48,286 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 11:17:48,286 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 11:17:48,286 DEBUG [RS:0;jenkins-hbase17:37509] regionserver.HRegionServer(1504): Waiting on 1588230740, 5437b1c01d72b0da90c7ad3989ee4c8c 2023-07-21 11:17:48,286 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 11:17:48,287 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 11:17:48,287 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 11:17:48,287 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.82 KB 2023-07-21 11:17:48,288 DEBUG [RS:2;jenkins-hbase17:36969] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/oldWALs 2023-07-21 11:17:48,288 INFO [RS:2;jenkins-hbase17:36969] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C36969%2C1689938263902:(num 1689938264425) 2023-07-21 11:17:48,288 DEBUG [RS:2;jenkins-hbase17:36969] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:48,288 INFO [RS:2;jenkins-hbase17:36969] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:48,288 INFO [RS:2;jenkins-hbase17:36969] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 11:17:48,289 INFO [RS:2;jenkins-hbase17:36969] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:17:48,289 INFO [RS:2;jenkins-hbase17:36969] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-21 11:17:48,289 INFO [RS:2;jenkins-hbase17:36969] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 11:17:48,290 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:17:48,290 INFO [RS:2;jenkins-hbase17:36969] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:36969 2023-07-21 11:17:48,291 DEBUG [RS:3;jenkins-hbase17:33755] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/oldWALs 2023-07-21 11:17:48,291 INFO [RS:3;jenkins-hbase17:33755] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C33755%2C1689938265741:(num 1689938266143) 2023-07-21 11:17:48,291 DEBUG [RS:3;jenkins-hbase17:33755] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:48,291 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:33755-0x1018798f8bd000b, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:48,291 INFO [RS:3;jenkins-hbase17:33755] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:48,291 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:48,291 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:48,291 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:48,291 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:48,291 INFO [RS:3;jenkins-hbase17:33755] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 11:17:48,291 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:33755-0x1018798f8bd000b, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:48,291 INFO [RS:3;jenkins-hbase17:33755] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:17:48,291 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 11:17:48,291 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:48,291 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36969,1689938263902 2023-07-21 11:17:48,292 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,36969,1689938263902] 2023-07-21 11:17:48,292 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:48,291 INFO [RS:3;jenkins-hbase17:33755] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:17:48,292 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,36969,1689938263902; numProcessing=1 2023-07-21 11:17:48,292 INFO [RS:3;jenkins-hbase17:33755] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 11:17:48,293 INFO [RS:3;jenkins-hbase17:33755] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:33755 2023-07-21 11:17:48,294 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,36969,1689938263902 already deleted, retry=false 2023-07-21 11:17:48,294 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,36969,1689938263902 expired; onlineServers=3 2023-07-21 11:17:48,294 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 11:17:48,294 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 11:17:48,294 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33755,1689938265741 2023-07-21 11:17:48,294 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:33755-0x1018798f8bd000b, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33755,1689938265741 2023-07-21 11:17:48,295 ERROR [Listener at localhost.localdomain/37917-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@6293267e rejected from java.util.concurrent.ThreadPoolExecutor@66acbef4[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 4] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at 
java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-07-21 11:17:48,296 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:48,296 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33755,1689938265741 2023-07-21 11:17:48,296 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,33755,1689938265741] 2023-07-21 11:17:48,296 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,33755,1689938265741; numProcessing=2 2023-07-21 11:17:48,297 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,33755,1689938265741 already deleted, retry=false 2023-07-21 11:17:48,297 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,33755,1689938265741 expired; onlineServers=2 2023-07-21 11:17:48,305 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:48,305 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:48,305 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:48,315 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/namespace/6569624467c6eda0eba15844d5358c3c/.tmp/info/b71a7577e0f24a1eb15fe7ea52f8d66b 2023-07-21 11:17:48,320 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b71a7577e0f24a1eb15fe7ea52f8d66b 2023-07-21 11:17:48,321 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/namespace/6569624467c6eda0eba15844d5358c3c/.tmp/info/b71a7577e0f24a1eb15fe7ea52f8d66b as hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/namespace/6569624467c6eda0eba15844d5358c3c/info/b71a7577e0f24a1eb15fe7ea52f8d66b 2023-07-21 11:17:48,324 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:48,332 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.53 KB at sequenceid=29 (bloomFilter=true), 
to=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/rsgroup/5437b1c01d72b0da90c7ad3989ee4c8c/.tmp/m/e8cc74b7998e48f6af3b306b9517a26e 2023-07-21 11:17:48,334 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b71a7577e0f24a1eb15fe7ea52f8d66b 2023-07-21 11:17:48,334 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/namespace/6569624467c6eda0eba15844d5358c3c/info/b71a7577e0f24a1eb15fe7ea52f8d66b, entries=3, sequenceid=9, filesize=5.0 K 2023-07-21 11:17:48,335 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 6569624467c6eda0eba15844d5358c3c in 57ms, sequenceid=9, compaction requested=false 2023-07-21 11:17:48,340 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/.tmp/info/117d726893cd4dac9bb637f94d99e886 2023-07-21 11:17:48,341 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e8cc74b7998e48f6af3b306b9517a26e 2023-07-21 11:17:48,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/rsgroup/5437b1c01d72b0da90c7ad3989ee4c8c/.tmp/m/e8cc74b7998e48f6af3b306b9517a26e as hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/rsgroup/5437b1c01d72b0da90c7ad3989ee4c8c/m/e8cc74b7998e48f6af3b306b9517a26e 2023-07-21 11:17:48,343 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/namespace/6569624467c6eda0eba15844d5358c3c/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-21 11:17:48,346 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c. 2023-07-21 11:17:48,346 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 6569624467c6eda0eba15844d5358c3c: 2023-07-21 11:17:48,346 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689938264594.6569624467c6eda0eba15844d5358c3c. 
2023-07-21 11:17:48,347 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 117d726893cd4dac9bb637f94d99e886 2023-07-21 11:17:48,349 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e8cc74b7998e48f6af3b306b9517a26e 2023-07-21 11:17:48,349 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/rsgroup/5437b1c01d72b0da90c7ad3989ee4c8c/m/e8cc74b7998e48f6af3b306b9517a26e, entries=12, sequenceid=29, filesize=5.5 K 2023-07-21 11:17:48,350 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.53 KB/6685, heapSize ~10.80 KB/11064, currentSize=0 B/0 for 5437b1c01d72b0da90c7ad3989ee4c8c in 68ms, sequenceid=29, compaction requested=false 2023-07-21 11:17:48,358 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 11:17:48,358 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 11:17:48,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/rsgroup/5437b1c01d72b0da90c7ad3989ee4c8c/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-21 11:17:48,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:17:48,372 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c. 2023-07-21 11:17:48,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 5437b1c01d72b0da90c7ad3989ee4c8c: 2023-07-21 11:17:48,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689938264797.5437b1c01d72b0da90c7ad3989ee4c8c. 
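[Editor's illustration, not part of the captured log] The entries above show the hbase:namespace and hbase:rsgroup regions being closed, their memstores flushed to HFiles under .tmp, and the files committed into the store directories. Those flushes are triggered internally by region close; for orientation only, the same memstore-to-HFile flush can also be requested from a client through the public Admin API. This is a hedged sketch, not the code path the log records, and the table name and default configuration are assumptions.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Illustrative sketch: ask the cluster to flush a table's memstores to HFiles on demand.
// "my_table" and the default client Configuration are assumptions for the example.
public class ExplicitFlushSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      admin.flush(TableName.valueOf("my_table"));  // hypothetical table name
    }
  }
}
```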
2023-07-21 11:17:48,375 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/.tmp/rep_barrier/b646ababa41841a3ae1472f694c4c6b6 2023-07-21 11:17:48,381 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b646ababa41841a3ae1472f694c4c6b6 2023-07-21 11:17:48,411 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/.tmp/table/1b19c00864294a19b9b04666776dff22 2023-07-21 11:17:48,416 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1b19c00864294a19b9b04666776dff22 2023-07-21 11:17:48,418 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/.tmp/info/117d726893cd4dac9bb637f94d99e886 as hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/info/117d726893cd4dac9bb637f94d99e886 2023-07-21 11:17:48,423 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 117d726893cd4dac9bb637f94d99e886 2023-07-21 11:17:48,423 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/info/117d726893cd4dac9bb637f94d99e886, entries=22, sequenceid=26, filesize=7.3 K 2023-07-21 11:17:48,424 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/.tmp/rep_barrier/b646ababa41841a3ae1472f694c4c6b6 as hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/rep_barrier/b646ababa41841a3ae1472f694c4c6b6 2023-07-21 11:17:48,431 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b646ababa41841a3ae1472f694c4c6b6 2023-07-21 11:17:48,432 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/rep_barrier/b646ababa41841a3ae1472f694c4c6b6, entries=1, sequenceid=26, filesize=4.9 K 2023-07-21 11:17:48,433 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/.tmp/table/1b19c00864294a19b9b04666776dff22 as 
hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/table/1b19c00864294a19b9b04666776dff22 2023-07-21 11:17:48,440 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1b19c00864294a19b9b04666776dff22 2023-07-21 11:17:48,440 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/table/1b19c00864294a19b9b04666776dff22, entries=6, sequenceid=26, filesize=5.1 K 2023-07-21 11:17:48,441 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4621, heapSize ~8.77 KB/8984, currentSize=0 B/0 for 1588230740 in 154ms, sequenceid=26, compaction requested=false 2023-07-21 11:17:48,453 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:33755-0x1018798f8bd000b, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:48,453 INFO [RS:3;jenkins-hbase17:33755] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,33755,1689938265741; zookeeper connection closed. 2023-07-21 11:17:48,453 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:33755-0x1018798f8bd000b, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:48,454 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@72dcc091] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@72dcc091 2023-07-21 11:17:48,455 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-21 11:17:48,455 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:17:48,455 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 11:17:48,455 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 11:17:48,455 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 11:17:48,478 INFO [RS:1;jenkins-hbase17:39253] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,39253,1689938263758; all regions closed. 
2023-07-21 11:17:48,486 DEBUG [RS:1;jenkins-hbase17:39253] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/oldWALs 2023-07-21 11:17:48,486 INFO [RS:1;jenkins-hbase17:39253] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C39253%2C1689938263758:(num 1689938264439) 2023-07-21 11:17:48,486 DEBUG [RS:1;jenkins-hbase17:39253] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:48,486 INFO [RS:1;jenkins-hbase17:39253] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:48,486 INFO [RS:1;jenkins-hbase17:39253] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 11:17:48,486 INFO [RS:1;jenkins-hbase17:39253] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:17:48,486 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:17:48,486 INFO [RS:1;jenkins-hbase17:39253] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:17:48,486 INFO [RS:1;jenkins-hbase17:39253] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 11:17:48,487 INFO [RS:1;jenkins-hbase17:39253] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:39253 2023-07-21 11:17:48,487 INFO [RS:0;jenkins-hbase17:37509] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,37509,1689938263634; all regions closed. 2023-07-21 11:17:48,489 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,39253,1689938263758 2023-07-21 11:17:48,489 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:48,489 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,39253,1689938263758 2023-07-21 11:17:48,492 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,39253,1689938263758] 2023-07-21 11:17:48,492 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,39253,1689938263758; numProcessing=3 2023-07-21 11:17:48,493 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,39253,1689938263758 already deleted, retry=false 2023-07-21 11:17:48,493 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,39253,1689938263758 expired; onlineServers=1 2023-07-21 11:17:48,494 DEBUG [RS:0;jenkins-hbase17:37509] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/oldWALs 2023-07-21 11:17:48,494 INFO [RS:0;jenkins-hbase17:37509] 
wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C37509%2C1689938263634.meta:.meta(num 1689938264528) 2023-07-21 11:17:48,500 DEBUG [RS:0;jenkins-hbase17:37509] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/oldWALs 2023-07-21 11:17:48,500 INFO [RS:0;jenkins-hbase17:37509] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C37509%2C1689938263634:(num 1689938264422) 2023-07-21 11:17:48,500 DEBUG [RS:0;jenkins-hbase17:37509] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:48,501 INFO [RS:0;jenkins-hbase17:37509] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:17:48,501 INFO [RS:0;jenkins-hbase17:37509] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 11:17:48,501 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:17:48,502 INFO [RS:0;jenkins-hbase17:37509] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:37509 2023-07-21 11:17:48,553 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:48,553 INFO [RS:2;jenkins-hbase17:36969] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,36969,1689938263902; zookeeper connection closed. 2023-07-21 11:17:48,554 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:36969-0x1018798f8bd0003, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:48,554 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@66569c24] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@66569c24 2023-07-21 11:17:48,593 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:48,593 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018798f8bd0002, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:17:48,593 INFO [RS:1;jenkins-hbase17:39253] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,39253,1689938263758; zookeeper connection closed. 
2023-07-21 11:17:48,593 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@728f95d7] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@728f95d7 2023-07-21 11:17:48,593 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,37509,1689938263634 2023-07-21 11:17:48,594 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:17:48,594 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,37509,1689938263634] 2023-07-21 11:17:48,595 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,37509,1689938263634; numProcessing=4 2023-07-21 11:17:48,595 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,37509,1689938263634 already deleted, retry=false 2023-07-21 11:17:48,595 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,37509,1689938263634 expired; onlineServers=0 2023-07-21 11:17:48,595 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,37771,1689938263479' ***** 2023-07-21 11:17:48,595 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 11:17:48,596 DEBUG [M:0;jenkins-hbase17:37771] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3518a119, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:17:48,596 INFO [M:0;jenkins-hbase17:37771] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:17:48,598 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 11:17:48,600 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:17:48,601 INFO [M:0;jenkins-hbase17:37771] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@529338f3{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 11:17:48,601 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:17:48,601 INFO [M:0;jenkins-hbase17:37771] server.AbstractConnector(383): Stopped ServerConnector@5c5ae6a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:17:48,602 INFO 
[M:0;jenkins-hbase17:37771] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:17:48,603 INFO [M:0;jenkins-hbase17:37771] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@10ebf347{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:17:48,603 INFO [M:0;jenkins-hbase17:37771] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@dbcf554{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/hadoop.log.dir/,STOPPED} 2023-07-21 11:17:48,604 INFO [M:0;jenkins-hbase17:37771] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,37771,1689938263479 2023-07-21 11:17:48,604 INFO [M:0;jenkins-hbase17:37771] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,37771,1689938263479; all regions closed. 2023-07-21 11:17:48,604 DEBUG [M:0;jenkins-hbase17:37771] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:17:48,604 INFO [M:0;jenkins-hbase17:37771] master.HMaster(1491): Stopping master jetty server 2023-07-21 11:17:48,605 INFO [M:0;jenkins-hbase17:37771] server.AbstractConnector(383): Stopped ServerConnector@47fec777{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:17:48,605 DEBUG [M:0;jenkins-hbase17:37771] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 11:17:48,605 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-21 11:17:48,605 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938264166] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938264166,5,FailOnTimeoutGroup] 2023-07-21 11:17:48,605 DEBUG [M:0;jenkins-hbase17:37771] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 11:17:48,605 INFO [M:0;jenkins-hbase17:37771] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 11:17:48,605 INFO [M:0;jenkins-hbase17:37771] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-21 11:17:48,605 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938264166] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938264166,5,FailOnTimeoutGroup] 2023-07-21 11:17:48,606 INFO [M:0;jenkins-hbase17:37771] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown 2023-07-21 11:17:48,606 DEBUG [M:0;jenkins-hbase17:37771] master.HMaster(1512): Stopping service threads 2023-07-21 11:17:48,606 INFO [M:0;jenkins-hbase17:37771] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 11:17:48,606 ERROR [M:0;jenkins-hbase17:37771] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-21 11:17:48,606 INFO [M:0;jenkins-hbase17:37771] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 11:17:48,606 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-21 11:17:48,607 DEBUG [M:0;jenkins-hbase17:37771] zookeeper.ZKUtil(398): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-07-21 11:17:48,607 WARN [M:0;jenkins-hbase17:37771] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-07-21 11:17:48,607 INFO [M:0;jenkins-hbase17:37771] assignment.AssignmentManager(315): Stopping assignment manager
2023-07-21 11:17:48,607 INFO [M:0;jenkins-hbase17:37771] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-07-21 11:17:48,607 DEBUG [M:0;jenkins-hbase17:37771] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-07-21 11:17:48,607 INFO [M:0;jenkins-hbase17:37771] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-21 11:17:48,607 DEBUG [M:0;jenkins-hbase17:37771] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-21 11:17:48,607 DEBUG [M:0;jenkins-hbase17:37771] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-07-21 11:17:48,607 DEBUG [M:0;jenkins-hbase17:37771] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-21 11:17:48,607 INFO [M:0;jenkins-hbase17:37771] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.28 KB heapSize=90.73 KB
2023-07-21 11:17:49,028 INFO [M:0;jenkins-hbase17:37771] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.28 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/0f2dcaa686774d0a87ea4acaac1876c9
2023-07-21 11:17:49,034 DEBUG [M:0;jenkins-hbase17:37771] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/0f2dcaa686774d0a87ea4acaac1876c9 as hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0f2dcaa686774d0a87ea4acaac1876c9
2023-07-21 11:17:49,040 INFO [M:0;jenkins-hbase17:37771] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39155/user/jenkins/test-data/286f2495-e387-be5a-1ec7-b1ee938ef59a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0f2dcaa686774d0a87ea4acaac1876c9, entries=22, sequenceid=175, filesize=11.1 K
2023-07-21 11:17:49,041 INFO [M:0;jenkins-hbase17:37771] regionserver.HRegion(2948): Finished flush of dataSize ~76.28 KB/78108, heapSize ~90.71 KB/92888, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 434ms, sequenceid=175, compaction requested=false
2023-07-21 11:17:49,043 INFO [M:0;jenkins-hbase17:37771] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-21 11:17:49,043 DEBUG [M:0;jenkins-hbase17:37771] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-21 11:17:49,047 INFO [M:0;jenkins-hbase17:37771] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-07-21 11:17:49,047 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-21 11:17:49,048 INFO [M:0;jenkins-hbase17:37771] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:37771
2023-07-21 11:17:49,048 DEBUG [M:0;jenkins-hbase17:37771] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,37771,1689938263479 already deleted, retry=false
2023-07-21 11:17:49,155 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 11:17:49,155 INFO [M:0;jenkins-hbase17:37771] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,37771,1689938263479; zookeeper connection closed.
2023-07-21 11:17:49,155 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): master:37771-0x1018798f8bd0000, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 11:17:49,255 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 11:17:49,255 INFO [RS:0;jenkins-hbase17:37509] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,37509,1689938263634; zookeeper connection closed.
2023-07-21 11:17:49,256 DEBUG [Listener at localhost.localdomain/37917-EventThread] zookeeper.ZKWatcher(600): regionserver:37509-0x1018798f8bd0001, quorum=127.0.0.1:54201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 11:17:49,256 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5711a10e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5711a10e
2023-07-21 11:17:49,256 INFO [Listener at localhost.localdomain/37917] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-21 11:17:49,256 WARN [Listener at localhost.localdomain/37917] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-21 11:17:49,260 INFO [Listener at localhost.localdomain/37917] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-21 11:17:49,363 WARN [BP-1487723784-136.243.18.41-1689938262829 heartbeating to localhost.localdomain/127.0.0.1:39155] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-21 11:17:49,363 WARN [BP-1487723784-136.243.18.41-1689938262829 heartbeating to localhost.localdomain/127.0.0.1:39155] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1487723784-136.243.18.41-1689938262829 (Datanode Uuid a10b7906-dbda-47a3-bcee-c9bf422e15ff) service to localhost.localdomain/127.0.0.1:39155
2023-07-21 11:17:49,364 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/dfs/data/data5/current/BP-1487723784-136.243.18.41-1689938262829] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 11:17:49,364 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/dfs/data/data6/current/BP-1487723784-136.243.18.41-1689938262829] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 11:17:49,366 WARN [Listener at localhost.localdomain/37917] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-21 11:17:49,374 INFO [Listener at localhost.localdomain/37917] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-21 11:17:49,477 WARN [BP-1487723784-136.243.18.41-1689938262829 heartbeating to localhost.localdomain/127.0.0.1:39155] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-21 11:17:49,477 WARN [BP-1487723784-136.243.18.41-1689938262829 heartbeating to localhost.localdomain/127.0.0.1:39155] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1487723784-136.243.18.41-1689938262829 (Datanode Uuid 49b97863-398f-442f-bf84-5540d8083894) service to localhost.localdomain/127.0.0.1:39155
2023-07-21 11:17:49,478 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/dfs/data/data3/current/BP-1487723784-136.243.18.41-1689938262829] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 11:17:49,478 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/dfs/data/data4/current/BP-1487723784-136.243.18.41-1689938262829] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 11:17:49,482 WARN [Listener at localhost.localdomain/37917] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-21 11:17:49,497 INFO [Listener at localhost.localdomain/37917] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-21 11:17:49,500 WARN [BP-1487723784-136.243.18.41-1689938262829 heartbeating to localhost.localdomain/127.0.0.1:39155] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-21 11:17:49,501 WARN [BP-1487723784-136.243.18.41-1689938262829 heartbeating to localhost.localdomain/127.0.0.1:39155] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1487723784-136.243.18.41-1689938262829 (Datanode Uuid ae7bca27-414c-45ea-8ecc-45f3a7f89c08) service to localhost.localdomain/127.0.0.1:39155
2023-07-21 11:17:49,502 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/dfs/data/data1/current/BP-1487723784-136.243.18.41-1689938262829] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 11:17:49,502 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c7374851-7b9c-469e-137e-aa188ac8e2e0/cluster_db4c359e-8cf9-9715-c4da-b5269670a5a5/dfs/data/data2/current/BP-1487723784-136.243.18.41-1689938262829] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 11:17:49,519 INFO [Listener at localhost.localdomain/37917] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0
2023-07-21 11:17:49,639 INFO [Listener at localhost.localdomain/37917] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-21 11:17:49,677 INFO [Listener at localhost.localdomain/37917] hbase.HBaseTestingUtility(1293): Minicluster is down
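For context, a minimal sketch of the JUnit lifecycle that typically drives the shutdown sequence logged above; this is not taken from the log, the class and constant names are hypothetical, and only the HBaseTestingUtility startMiniCluster/shutdownMiniCluster calls are assumed from the public test API.

```java
// Hypothetical sketch: how an HBase test class usually starts and tears down
// the mini cluster whose shutdown produces the "Minicluster is down" message.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class MiniClusterLifecycleSketch {
  // One shared testing utility per test class (name is illustrative).
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpBeforeClass() throws Exception {
    // Starts a local HDFS, ZooKeeper and HBase mini cluster with 3 region servers.
    TEST_UTIL.startMiniCluster(3);
  }

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    // Stops the HBase cluster, the DFS datanodes and the MiniZK cluster,
    // after which HBaseTestingUtility logs that the minicluster is down.
    TEST_UTIL.shutdownMiniCluster();
  }
}
```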