2023-07-12 10:57:57,057 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497 2023-07-12 10:57:57,080 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics timeout: 13 mins 2023-07-12 10:57:57,100 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 10:57:57,101 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/cluster_88e0f84f-bfb4-0918-fd25-f5762e628808, deleteOnExit=true 2023-07-12 10:57:57,101 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 10:57:57,102 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/test.cache.data in system properties and HBase conf 2023-07-12 10:57:57,102 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 10:57:57,103 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir in system properties and HBase conf 2023-07-12 10:57:57,103 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 10:57:57,104 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 10:57:57,104 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 10:57:57,222 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-12 10:57:57,665 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-12 10:57:57,671 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 10:57:57,671 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 10:57:57,672 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 10:57:57,672 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 10:57:57,673 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 10:57:57,673 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 10:57:57,674 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 10:57:57,674 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 10:57:57,674 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 10:57:57,675 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/nfs.dump.dir in system properties and HBase conf 2023-07-12 10:57:57,675 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir in system properties and HBase conf 2023-07-12 10:57:57,675 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 10:57:57,676 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 10:57:57,676 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 10:57:58,283 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 10:57:58,288 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 10:57:58,611 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-12 10:57:58,814 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-12 10:57:58,831 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 10:57:58,869 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 10:57:58,910 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/Jetty_localhost_41031_hdfs____71p7kr/webapp 2023-07-12 10:57:59,083 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41031 2023-07-12 10:57:59,095 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 10:57:59,095 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 10:57:59,583 WARN [Listener at localhost/42757] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 10:57:59,690 WARN [Listener at localhost/42757] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 10:57:59,716 WARN [Listener at localhost/42757] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 10:57:59,724 INFO [Listener at localhost/42757] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 10:57:59,732 INFO [Listener at localhost/42757] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/Jetty_localhost_41575_datanode____.ognmxv/webapp 2023-07-12 10:57:59,846 INFO [Listener at localhost/42757] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41575 2023-07-12 10:58:00,274 WARN [Listener at localhost/43233] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 10:58:00,320 WARN [Listener at localhost/43233] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 10:58:00,326 WARN [Listener at localhost/43233] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 10:58:00,328 INFO [Listener at localhost/43233] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 10:58:00,338 INFO [Listener at localhost/43233] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/Jetty_localhost_45819_datanode____.npow2i/webapp 2023-07-12 10:58:00,479 INFO [Listener at localhost/43233] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45819 2023-07-12 10:58:00,507 WARN [Listener at localhost/37379] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 10:58:00,560 WARN [Listener at localhost/37379] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 10:58:00,563 WARN [Listener at localhost/37379] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 10:58:00,565 INFO [Listener at localhost/37379] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 10:58:00,577 INFO [Listener at localhost/37379] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/Jetty_localhost_42283_datanode____.zbsw16/webapp 2023-07-12 10:58:00,698 INFO [Listener at localhost/37379] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42283 2023-07-12 10:58:00,726 WARN [Listener at localhost/44831] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 10:58:00,896 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x43479b57c2953fed: Processing first storage report for DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73 from datanode 0b8e6506-b2d3-4f82-af25-88926a9a69f5 2023-07-12 10:58:00,898 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x43479b57c2953fed: from storage DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73 node DatanodeRegistration(127.0.0.1:40977, datanodeUuid=0b8e6506-b2d3-4f82-af25-88926a9a69f5, infoPort=46527, 
infoSecurePort=0, ipcPort=44831, storageInfo=lv=-57;cid=testClusterID;nsid=1557473941;c=1689159478370), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-12 10:58:00,898 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd6c3a918049505d6: Processing first storage report for DS-18996c26-134b-4ae1-9bfa-bd02893d59d3 from datanode b3713489-402d-4b1a-a017-e520575ddeaf 2023-07-12 10:58:00,898 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd6c3a918049505d6: from storage DS-18996c26-134b-4ae1-9bfa-bd02893d59d3 node DatanodeRegistration(127.0.0.1:36995, datanodeUuid=b3713489-402d-4b1a-a017-e520575ddeaf, infoPort=43125, infoSecurePort=0, ipcPort=43233, storageInfo=lv=-57;cid=testClusterID;nsid=1557473941;c=1689159478370), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 10:58:00,899 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd6c3a918049505d6: Processing first storage report for DS-a2738947-04f1-41cd-ba2e-c49902e6daaa from datanode b3713489-402d-4b1a-a017-e520575ddeaf 2023-07-12 10:58:00,899 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd6c3a918049505d6: from storage DS-a2738947-04f1-41cd-ba2e-c49902e6daaa node DatanodeRegistration(127.0.0.1:36995, datanodeUuid=b3713489-402d-4b1a-a017-e520575ddeaf, infoPort=43125, infoSecurePort=0, ipcPort=43233, storageInfo=lv=-57;cid=testClusterID;nsid=1557473941;c=1689159478370), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 10:58:00,900 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x43479b57c2953fed: Processing first storage report for DS-1780b8f5-7cca-4e51-be5e-194084843a4b from datanode 0b8e6506-b2d3-4f82-af25-88926a9a69f5 2023-07-12 10:58:00,900 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x43479b57c2953fed: from storage DS-1780b8f5-7cca-4e51-be5e-194084843a4b node DatanodeRegistration(127.0.0.1:40977, datanodeUuid=0b8e6506-b2d3-4f82-af25-88926a9a69f5, infoPort=46527, infoSecurePort=0, ipcPort=44831, storageInfo=lv=-57;cid=testClusterID;nsid=1557473941;c=1689159478370), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 10:58:00,900 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5a9b25660efe77e3: Processing first storage report for DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214 from datanode 8b34f536-f3b6-4e8c-8608-ad66bf3bae1d 2023-07-12 10:58:00,900 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5a9b25660efe77e3: from storage DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214 node DatanodeRegistration(127.0.0.1:44321, datanodeUuid=8b34f536-f3b6-4e8c-8608-ad66bf3bae1d, infoPort=41957, infoSecurePort=0, ipcPort=37379, storageInfo=lv=-57;cid=testClusterID;nsid=1557473941;c=1689159478370), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-12 10:58:00,902 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5a9b25660efe77e3: Processing first storage report for DS-906dc7f0-4fff-4fc0-b75f-0503297d6173 from datanode 8b34f536-f3b6-4e8c-8608-ad66bf3bae1d 2023-07-12 10:58:00,902 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5a9b25660efe77e3: from storage 
DS-906dc7f0-4fff-4fc0-b75f-0503297d6173 node DatanodeRegistration(127.0.0.1:44321, datanodeUuid=8b34f536-f3b6-4e8c-8608-ad66bf3bae1d, infoPort=41957, infoSecurePort=0, ipcPort=37379, storageInfo=lv=-57;cid=testClusterID;nsid=1557473941;c=1689159478370), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-12 10:58:01,133 DEBUG [Listener at localhost/44831] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497 2023-07-12 10:58:01,219 INFO [Listener at localhost/44831] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/cluster_88e0f84f-bfb4-0918-fd25-f5762e628808/zookeeper_0, clientPort=49301, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/cluster_88e0f84f-bfb4-0918-fd25-f5762e628808/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/cluster_88e0f84f-bfb4-0918-fd25-f5762e628808/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 10:58:01,234 INFO [Listener at localhost/44831] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=49301 2023-07-12 10:58:01,243 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:01,246 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:01,970 INFO [Listener at localhost/44831] util.FSUtils(471): Created version file at hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5 with version=8 2023-07-12 10:58:01,971 INFO [Listener at localhost/44831] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/hbase-staging 2023-07-12 10:58:01,980 DEBUG [Listener at localhost/44831] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 10:58:01,980 DEBUG [Listener at localhost/44831] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 10:58:01,980 DEBUG [Listener at localhost/44831] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 10:58:01,980 DEBUG [Listener at localhost/44831] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
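The entries above record HBaseTestingUtility bringing up the test minicluster with the topology shown in the StartMiniClusterOption line (1 master, 3 region servers, 3 datanodes, 1 ZooKeeper server). As a hedged illustration only, not part of this log, a test on the 2.x testing API would typically request that topology roughly as below; the class name MiniClusterSketch is hypothetical, and startMiniCluster/shutdownMiniCluster are the usual test-utility calls assumed here.

```java
// Hypothetical sketch (not from the log): starting a minicluster with the
// topology recorded above: 1 master, 3 region servers, 3 datanodes, 1 ZK server.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);   // brings up DFS, ZK, master and region servers
    try {
      // test body would go here
    } finally {
      util.shutdownMiniCluster();    // tears the cluster down; deleteOnExit dirs are removed
    }
  }
}
```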
2023-07-12 10:58:02,358 INFO [Listener at localhost/44831] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-12 10:58:03,029 INFO [Listener at localhost/44831] client.ConnectionUtils(127): master/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:03,072 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:03,073 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:03,073 INFO [Listener at localhost/44831] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:03,073 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:03,074 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:03,239 INFO [Listener at localhost/44831] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:03,320 DEBUG [Listener at localhost/44831] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-12 10:58:03,441 INFO [Listener at localhost/44831] ipc.NettyRpcServer(120): Bind to /172.31.2.10:41017 2023-07-12 10:58:03,457 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:03,459 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:03,486 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41017 connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:03,536 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:410170x0, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:03,548 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:41017-0x1015920fb080000 connected 2023-07-12 10:58:03,591 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:03,592 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:03,597 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:03,607 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41017 2023-07-12 10:58:03,608 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41017 2023-07-12 10:58:03,608 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41017 2023-07-12 10:58:03,608 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41017 2023-07-12 10:58:03,609 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41017 2023-07-12 10:58:03,643 INFO [Listener at localhost/44831] log.Log(170): Logging initialized @7335ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-12 10:58:03,790 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:03,791 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:03,792 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:03,794 INFO [Listener at localhost/44831] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 10:58:03,794 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:03,794 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:03,798 INFO [Listener at localhost/44831] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
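The ZKUtil entries above show the master registering watchers on znodes that do not exist yet (/hbase/master, /hbase/running, /hbase/acl) under the mini ZooKeeper ensemble on client port 49301. As a hedged aside, not part of the log, those paths can be inspected with the plain ZooKeeper client API; the class name ZkPeek is hypothetical, and the port and paths are taken from the entries above.

```java
// Hypothetical sketch (not from the log): checking the znodes the master and
// region servers watch on the mini ZooKeeper ensemble started for this test.
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ZkPeek {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("127.0.0.1:49301", 30000, event -> { });
    for (String path : new String[] {"/hbase/master", "/hbase/running", "/hbase/acl"}) {
      Stat stat = zk.exists(path, false);   // null until the znode is created
      System.out.println(path + " -> " + (stat == null ? "absent" : "exists"));
    }
    zk.close();
  }
}
```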
2023-07-12 10:58:03,861 INFO [Listener at localhost/44831] http.HttpServer(1146): Jetty bound to port 35301 2023-07-12 10:58:03,864 INFO [Listener at localhost/44831] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:03,955 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:03,959 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7b825da1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:03,960 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:03,960 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1c693181{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:04,160 INFO [Listener at localhost/44831] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:04,176 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:04,176 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:04,179 INFO [Listener at localhost/44831] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 10:58:04,189 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:04,218 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@18ce6625{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/jetty-0_0_0_0-35301-hbase-server-2_4_18-SNAPSHOT_jar-_-any-9123496052159425900/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 10:58:04,230 INFO [Listener at localhost/44831] server.AbstractConnector(333): Started ServerConnector@79ad9a6d{HTTP/1.1, (http/1.1)}{0.0.0.0:35301} 2023-07-12 10:58:04,230 INFO [Listener at localhost/44831] server.Server(415): Started @7923ms 2023-07-12 10:58:04,234 INFO [Listener at localhost/44831] master.HMaster(444): hbase.rootdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5, hbase.cluster.distributed=false 2023-07-12 10:58:04,336 INFO [Listener at localhost/44831] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:04,337 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:04,337 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:04,337 INFO 
[Listener at localhost/44831] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:04,337 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:04,338 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:04,343 INFO [Listener at localhost/44831] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:04,346 INFO [Listener at localhost/44831] ipc.NettyRpcServer(120): Bind to /172.31.2.10:42501 2023-07-12 10:58:04,348 INFO [Listener at localhost/44831] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:04,356 DEBUG [Listener at localhost/44831] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:04,357 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:04,360 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:04,362 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42501 connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:04,366 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:425010x0, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:04,367 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:425010x0, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:04,368 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:425010x0, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:04,369 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42501-0x1015920fb080001 connected 2023-07-12 10:58:04,370 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:04,370 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42501 2023-07-12 10:58:04,370 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42501 2023-07-12 10:58:04,371 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42501 2023-07-12 10:58:04,371 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, 
numCallQueues=1, port=42501 2023-07-12 10:58:04,372 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42501 2023-07-12 10:58:04,374 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:04,374 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:04,374 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:04,375 INFO [Listener at localhost/44831] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:04,376 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:04,376 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:04,376 INFO [Listener at localhost/44831] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 10:58:04,378 INFO [Listener at localhost/44831] http.HttpServer(1146): Jetty bound to port 43647 2023-07-12 10:58:04,378 INFO [Listener at localhost/44831] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:04,382 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:04,382 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7f13c5ce{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:04,382 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:04,383 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4b249fe8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:04,503 INFO [Listener at localhost/44831] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:04,504 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:04,505 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:04,505 INFO [Listener at localhost/44831] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 10:58:04,506 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:04,511 INFO [Listener at localhost/44831] 
handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5801de9e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/jetty-0_0_0_0-43647-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2184739607480107456/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:04,512 INFO [Listener at localhost/44831] server.AbstractConnector(333): Started ServerConnector@9ce6c0e{HTTP/1.1, (http/1.1)}{0.0.0.0:43647} 2023-07-12 10:58:04,512 INFO [Listener at localhost/44831] server.Server(415): Started @8205ms 2023-07-12 10:58:04,527 INFO [Listener at localhost/44831] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:04,527 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:04,527 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:04,527 INFO [Listener at localhost/44831] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:04,528 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:04,528 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:04,528 INFO [Listener at localhost/44831] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:04,530 INFO [Listener at localhost/44831] ipc.NettyRpcServer(120): Bind to /172.31.2.10:39623 2023-07-12 10:58:04,530 INFO [Listener at localhost/44831] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:04,531 DEBUG [Listener at localhost/44831] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:04,532 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:04,534 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:04,535 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39623 connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:04,539 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:396230x0, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:04,541 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39623-0x1015920fb080002 connected 2023-07-12 10:58:04,541 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:39623-0x1015920fb080002, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:04,542 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:39623-0x1015920fb080002, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:04,543 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:39623-0x1015920fb080002, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:04,546 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39623 2023-07-12 10:58:04,546 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39623 2023-07-12 10:58:04,546 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39623 2023-07-12 10:58:04,554 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39623 2023-07-12 10:58:04,554 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39623 2023-07-12 10:58:04,556 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:04,556 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:04,557 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:04,557 INFO [Listener at localhost/44831] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:04,557 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:04,557 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:04,558 INFO [Listener at localhost/44831] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
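The entries above and below record the individual region server processes coming up inside the minicluster (RPC server bind, ZooKeeper session, info server). As a hedged illustration only, not part of the log, a test holding the HBaseTestingUtility instance can enumerate those region servers roughly as below; the class name ListRegionServers is hypothetical, and getMiniHBaseCluster/getRegionServerThreads/getRegionServer/getServerName are the usual testing-API accessors assumed here.

```java
// Hypothetical sketch (not from the log): listing the region servers of the
// running minicluster whose startup is recorded in these entries.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.regionserver.HRegionServer;

public class ListRegionServers {
  static void dump(HBaseTestingUtility util) {
    MiniHBaseCluster cluster = util.getMiniHBaseCluster();
    for (int i = 0; i < cluster.getRegionServerThreads().size(); i++) {
      HRegionServer rs = cluster.getRegionServer(i);
      System.out.println(rs.getServerName());   // printed as host,port,startcode
    }
  }
}
```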
2023-07-12 10:58:04,558 INFO [Listener at localhost/44831] http.HttpServer(1146): Jetty bound to port 43103 2023-07-12 10:58:04,559 INFO [Listener at localhost/44831] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:04,562 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:04,562 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4da0451f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:04,562 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:04,563 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6f2ec142{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:04,695 INFO [Listener at localhost/44831] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:04,696 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:04,696 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:04,696 INFO [Listener at localhost/44831] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 10:58:04,698 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:04,699 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@40dc144b{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/jetty-0_0_0_0-43103-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8998703780721372999/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:04,700 INFO [Listener at localhost/44831] server.AbstractConnector(333): Started ServerConnector@6786282b{HTTP/1.1, (http/1.1)}{0.0.0.0:43103} 2023-07-12 10:58:04,700 INFO [Listener at localhost/44831] server.Server(415): Started @8393ms 2023-07-12 10:58:04,714 INFO [Listener at localhost/44831] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:04,714 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:04,714 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:04,715 INFO [Listener at localhost/44831] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:04,715 INFO 
[Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:04,715 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:04,715 INFO [Listener at localhost/44831] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:04,723 INFO [Listener at localhost/44831] ipc.NettyRpcServer(120): Bind to /172.31.2.10:45597 2023-07-12 10:58:04,724 INFO [Listener at localhost/44831] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:04,727 DEBUG [Listener at localhost/44831] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:04,729 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:04,730 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:04,731 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45597 connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:04,749 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:455970x0, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:04,752 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45597-0x1015920fb080003 connected 2023-07-12 10:58:04,757 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:04,759 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:04,760 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:04,769 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45597 2023-07-12 10:58:04,780 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45597 2023-07-12 10:58:04,781 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45597 2023-07-12 10:58:04,782 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45597 2023-07-12 10:58:04,782 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=45597 2023-07-12 10:58:04,785 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:04,786 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:04,786 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:04,787 INFO [Listener at localhost/44831] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:04,787 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:04,787 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:04,787 INFO [Listener at localhost/44831] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 10:58:04,788 INFO [Listener at localhost/44831] http.HttpServer(1146): Jetty bound to port 34447 2023-07-12 10:58:04,789 INFO [Listener at localhost/44831] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:04,794 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:04,794 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4a70f486{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:04,795 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:04,795 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7bebd089{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:04,911 INFO [Listener at localhost/44831] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:04,912 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:04,913 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:04,913 INFO [Listener at localhost/44831] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 10:58:04,914 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:04,915 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@4fed8179{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/jetty-0_0_0_0-34447-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8551647729556146398/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:04,916 INFO [Listener at localhost/44831] server.AbstractConnector(333): Started ServerConnector@100e6ca1{HTTP/1.1, (http/1.1)}{0.0.0.0:34447} 2023-07-12 10:58:04,916 INFO [Listener at localhost/44831] server.Server(415): Started @8608ms 2023-07-12 10:58:04,922 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:04,926 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@8d15999{HTTP/1.1, (http/1.1)}{0.0.0.0:42747} 2023-07-12 10:58:04,926 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.Server(415): Started @8619ms 2023-07-12 10:58:04,926 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase9.apache.org,41017,1689159482181 2023-07-12 10:58:04,941 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 10:58:04,942 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase9.apache.org,41017,1689159482181 2023-07-12 10:58:04,969 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:04,969 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:04,970 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:39623-0x1015920fb080002, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:04,971 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:04,973 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:04,975 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 10:58:04,978 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 10:58:04,978 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase9.apache.org,41017,1689159482181 from backup master directory 2023-07-12 10:58:04,982 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase9.apache.org,41017,1689159482181 2023-07-12 10:58:04,983 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 10:58:04,983 WARN [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 10:58:04,983 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase9.apache.org,41017,1689159482181 2023-07-12 10:58:04,987 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-12 10:58:04,989 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-12 10:58:05,111 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/hbase.id with ID: 2ee0ec36-84f9-4576-888d-f37f0b52beaa 2023-07-12 10:58:05,186 INFO [master/jenkins-hbase9:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:05,211 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:05,316 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2b1f7a51 to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:05,368 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@69091d32, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:05,402 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:05,405 INFO [master/jenkins-hbase9:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 10:58:05,434 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-12 10:58:05,434 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-12 10:58:05,437 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-12 10:58:05,442 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-12 10:58:05,444 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:05,483 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store-tmp 2023-07-12 10:58:05,524 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:05,524 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 10:58:05,525 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:05,525 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:05,525 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 10:58:05,525 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:05,525 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 10:58:05,525 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 10:58:05,527 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,41017,1689159482181 2023-07-12 10:58:05,550 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C41017%2C1689159482181, suffix=, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,41017,1689159482181, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/oldWALs, maxLogs=10 2023-07-12 10:58:05,615 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:05,615 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:05,615 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:05,630 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 10:58:05,704 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,41017,1689159482181/jenkins-hbase9.apache.org%2C41017%2C1689159482181.1689159485560 2023-07-12 10:58:05,704 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK], DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK], DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK]] 2023-07-12 10:58:05,705 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:05,705 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:05,709 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:05,710 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:05,801 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:05,811 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 10:58:05,852 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 10:58:05,865 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-12 10:58:05,870 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:05,871 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:05,887 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:05,891 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:05,892 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10100537920, jitterRate=-0.05931410193443298}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:05,892 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 10:58:05,893 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 10:58:05,915 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 10:58:05,915 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 10:58:05,917 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 10:58:05,919 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-12 10:58:05,964 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 45 msec 2023-07-12 10:58:05,964 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 10:58:05,989 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 10:58:05,995 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
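Illustrative note, not part of the captured log: the sizes in the "Opened 1595e783b53d99cd5eef43b6debb2682" entry above follow from stock sizing defaults (assumed here: hbase.hregion.memstore.flush.size = 128 MiB and hbase.hregion.max.filesize = 10 GiB), with the printed jitterRate applied to the max file size.

```java
// Sketch of the arithmetic behind the split-policy entry (defaults assumed).
long memstoreFlushSize = 134217728L;        // 128 MiB, hbase.hregion.memstore.flush.size
long maxFileSize = 10737418240L;            // 10 GiB, hbase.hregion.max.filesize
double jitterRate = -0.05931410193443298;   // per-region jitter printed in the log

long initialSize = 2 * memstoreFlushSize;   // 268435456 -> initialSize=268435456
long desiredMaxFileSize = (long) (maxFileSize * (1 + jitterRate));
// ~10100537916 -> desiredMaxFileSize=10100537920 (difference is float rounding)
```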
2023-07-12 10:58:06,002 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 10:58:06,008 INFO [master/jenkins-hbase9:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 10:58:06,012 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 10:58:06,015 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:06,016 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 10:58:06,016 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 10:58:06,030 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 10:58:06,034 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:39623-0x1015920fb080002, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:06,034 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:06,034 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:06,034 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:06,035 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:06,035 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase9.apache.org,41017,1689159482181, sessionid=0x1015920fb080000, setting cluster-up flag (Was=false) 2023-07-12 10:58:06,054 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:06,059 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 10:58:06,061 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase9.apache.org,41017,1689159482181 2023-07-12 10:58:06,067 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:06,072 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 10:58:06,074 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase9.apache.org,41017,1689159482181 2023-07-12 10:58:06,077 WARN [master/jenkins-hbase9:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.hbase-snapshot/.tmp 2023-07-12 10:58:06,120 INFO [RS:0;jenkins-hbase9:42501] regionserver.HRegionServer(951): ClusterId : 2ee0ec36-84f9-4576-888d-f37f0b52beaa 2023-07-12 10:58:06,120 INFO [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(951): ClusterId : 2ee0ec36-84f9-4576-888d-f37f0b52beaa 2023-07-12 10:58:06,120 INFO [RS:1;jenkins-hbase9:39623] regionserver.HRegionServer(951): ClusterId : 2ee0ec36-84f9-4576-888d-f37f0b52beaa 2023-07-12 10:58:06,128 DEBUG [RS:0;jenkins-hbase9:42501] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:06,128 DEBUG [RS:1;jenkins-hbase9:39623] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:06,128 DEBUG [RS:2;jenkins-hbase9:45597] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:06,135 DEBUG [RS:1;jenkins-hbase9:39623] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:06,135 DEBUG [RS:0;jenkins-hbase9:42501] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:06,135 DEBUG [RS:2;jenkins-hbase9:45597] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:06,135 DEBUG [RS:0;jenkins-hbase9:42501] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:06,135 DEBUG [RS:1;jenkins-hbase9:39623] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:06,135 DEBUG [RS:2;jenkins-hbase9:45597] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:06,140 DEBUG [RS:2;jenkins-hbase9:45597] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:06,142 DEBUG [RS:1;jenkins-hbase9:39623] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:06,147 DEBUG [RS:0;jenkins-hbase9:42501] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:06,148 DEBUG [RS:2;jenkins-hbase9:45597] zookeeper.ReadOnlyZKClient(139): Connect 0x6cb4b525 to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-12 10:58:06,148 DEBUG [RS:1;jenkins-hbase9:39623] zookeeper.ReadOnlyZKClient(139): Connect 0x336774d7 to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:06,148 DEBUG [RS:0;jenkins-hbase9:42501] zookeeper.ReadOnlyZKClient(139): Connect 0x5375f53a to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:06,161 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 10:58:06,164 DEBUG [RS:0;jenkins-hbase9:42501] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7b029546, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:06,164 DEBUG [RS:1;jenkins-hbase9:39623] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7665fcdf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:06,164 DEBUG [RS:0;jenkins-hbase9:42501] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@53185d8d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:06,165 DEBUG [RS:1;jenkins-hbase9:39623] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3746b793, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:06,165 DEBUG [RS:2;jenkins-hbase9:45597] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4220d73e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:06,165 DEBUG [RS:2;jenkins-hbase9:45597] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4f963783, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:06,173 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 10:58:06,176 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:58:06,178 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 10:58:06,178 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
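Illustrative sketch, not taken from this run: the RSGroupAdminEndpoint coprocessor reported above is normally wired in through the standard master-coprocessor and balancer keys, so a test configuration along these lines is assumed before the mini cluster is started.

```java
// Hypothetical setup (assumption, not captured configuration): enable the
// rsgroup endpoint and the rsgroup-aware balancer on the master.
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.coprocessor.master.classes",
    "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
conf.set("hbase.master.loadbalancer.class",
    "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
```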
2023-07-12 10:58:06,192 DEBUG [RS:2;jenkins-hbase9:45597] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase9:45597 2023-07-12 10:58:06,195 DEBUG [RS:1;jenkins-hbase9:39623] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase9:39623 2023-07-12 10:58:06,200 INFO [RS:1;jenkins-hbase9:39623] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:06,201 INFO [RS:1;jenkins-hbase9:39623] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:06,201 DEBUG [RS:1;jenkins-hbase9:39623] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 10:58:06,202 DEBUG [RS:0;jenkins-hbase9:42501] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase9:42501 2023-07-12 10:58:06,202 INFO [RS:0;jenkins-hbase9:42501] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:06,203 INFO [RS:0;jenkins-hbase9:42501] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:06,203 DEBUG [RS:0;jenkins-hbase9:42501] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 10:58:06,207 INFO [RS:2;jenkins-hbase9:45597] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:06,207 INFO [RS:2;jenkins-hbase9:45597] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:06,207 DEBUG [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 10:58:06,208 INFO [RS:1;jenkins-hbase9:39623] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,41017,1689159482181 with isa=jenkins-hbase9.apache.org/172.31.2.10:39623, startcode=1689159484526 2023-07-12 10:58:06,208 INFO [RS:0;jenkins-hbase9:42501] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,41017,1689159482181 with isa=jenkins-hbase9.apache.org/172.31.2.10:42501, startcode=1689159484335 2023-07-12 10:58:06,208 INFO [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,41017,1689159482181 with isa=jenkins-hbase9.apache.org/172.31.2.10:45597, startcode=1689159484713 2023-07-12 10:58:06,238 DEBUG [RS:0;jenkins-hbase9:42501] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:06,241 DEBUG [RS:2;jenkins-hbase9:45597] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:06,242 DEBUG [RS:1;jenkins-hbase9:39623] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:06,298 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 10:58:06,315 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:55089, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:06,315 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:44821, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 
10:58:06,317 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:55121, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:06,328 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:06,339 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:06,341 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:06,348 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 10:58:06,355 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-12 10:58:06,356 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 10:58:06,356 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 10:58:06,358 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:06,358 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:06,358 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:06,358 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:06,358 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase9:0, corePoolSize=10, maxPoolSize=10 2023-07-12 10:58:06,358 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,358 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:06,359 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,362 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689159516362 2023-07-12 10:58:06,364 DEBUG [RS:0;jenkins-hbase9:42501] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 10:58:06,365 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 10:58:06,364 DEBUG [RS:1;jenkins-hbase9:39623] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 10:58:06,364 DEBUG [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 10:58:06,366 WARN [RS:1;jenkins-hbase9:39623] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-12 10:58:06,365 WARN [RS:0;jenkins-hbase9:42501] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
2023-07-12 10:58:06,366 WARN [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-12 10:58:06,370 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 10:58:06,372 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 10:58:06,373 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 10:58:06,376 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:06,378 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 10:58:06,379 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 10:58:06,379 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 10:58:06,379 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 10:58:06,380 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:06,381 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 10:58:06,384 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 10:58:06,385 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 10:58:06,390 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 10:58:06,391 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 10:58:06,396 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159486393,5,FailOnTimeoutGroup] 2023-07-12 10:58:06,396 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159486396,5,FailOnTimeoutGroup] 2023-07-12 10:58:06,396 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:06,396 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 10:58:06,398 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:06,399 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
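Illustrative sketch, an assumption rather than captured configuration: the cleaner chores initialized above take their delegate lists from the hbase.master.logcleaner.plugins and hbase.master.hfilecleaner.plugins properties, so a delegate list would be registered roughly like this.

```java
// Hypothetical registration of cleaner delegates; the classes named are among
// the stock cleaners this run reports initializing.
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.master.logcleaner.plugins",
    "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner");
conf.set("hbase.master.hfilecleaner.plugins",
    "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner,"
        + "org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner");
```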
2023-07-12 10:58:06,467 INFO [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,41017,1689159482181 with isa=jenkins-hbase9.apache.org/172.31.2.10:45597, startcode=1689159484713 2023-07-12 10:58:06,467 INFO [RS:1;jenkins-hbase9:39623] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,41017,1689159482181 with isa=jenkins-hbase9.apache.org/172.31.2.10:39623, startcode=1689159484526 2023-07-12 10:58:06,468 INFO [RS:0;jenkins-hbase9:42501] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,41017,1689159482181 with isa=jenkins-hbase9.apache.org/172.31.2.10:42501, startcode=1689159484335 2023-07-12 10:58:06,471 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:06,472 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:06,472 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5 2023-07-12 10:58:06,473 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41017] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:06,474 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 10:58:06,475 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 10:58:06,481 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41017] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:06,481 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:58:06,481 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 10:58:06,481 DEBUG [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5 2023-07-12 10:58:06,481 DEBUG [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42757 2023-07-12 10:58:06,482 DEBUG [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35301 2023-07-12 10:58:06,482 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41017] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:06,483 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 10:58:06,483 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 10:58:06,484 DEBUG [RS:0;jenkins-hbase9:42501] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5 2023-07-12 10:58:06,484 DEBUG [RS:0;jenkins-hbase9:42501] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42757 2023-07-12 10:58:06,484 DEBUG [RS:0;jenkins-hbase9:42501] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35301 2023-07-12 10:58:06,494 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:06,498 DEBUG [RS:1;jenkins-hbase9:39623] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5 2023-07-12 10:58:06,498 DEBUG [RS:0;jenkins-hbase9:42501] zookeeper.ZKUtil(162): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:06,498 DEBUG [RS:2;jenkins-hbase9:45597] zookeeper.ZKUtil(162): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:06,498 DEBUG [RS:1;jenkins-hbase9:39623] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42757 2023-07-12 10:58:06,499 WARN [RS:2;jenkins-hbase9:45597] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 10:58:06,499 WARN [RS:0;jenkins-hbase9:42501] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
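Illustrative sketch, not part of this log: the "Updating default servers ... Updated with servers: 3" entries track the three region servers landing in the implicit default rsgroup; a client could observe the same membership roughly as below, assuming the RSGroupAdminClient API from the hbase-rsgroup module.

```java
// Hypothetical membership check; connection setup and error handling omitted.
try (Connection conn = ConnectionFactory.createConnection(conf)) {
  RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
  RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
  System.out.println("default group servers: " + defaultGroup.getServers());
}
```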
2023-07-12 10:58:06,500 INFO [RS:2;jenkins-hbase9:45597] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:06,500 DEBUG [RS:1;jenkins-hbase9:39623] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35301 2023-07-12 10:58:06,500 INFO [RS:0;jenkins-hbase9:42501] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:06,501 DEBUG [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:06,501 DEBUG [RS:0;jenkins-hbase9:42501] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:06,502 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:06,502 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,45597,1689159484713] 2023-07-12 10:58:06,503 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,42501,1689159484335] 2023-07-12 10:58:06,503 DEBUG [RS:1;jenkins-hbase9:39623] zookeeper.ZKUtil(162): regionserver:39623-0x1015920fb080002, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:06,503 WARN [RS:1;jenkins-hbase9:39623] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 10:58:06,503 INFO [RS:1;jenkins-hbase9:39623] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:06,503 DEBUG [RS:1;jenkins-hbase9:39623] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:06,503 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,39623,1689159484526] 2023-07-12 10:58:06,529 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:06,537 DEBUG [RS:1;jenkins-hbase9:39623] zookeeper.ZKUtil(162): regionserver:39623-0x1015920fb080002, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:06,537 DEBUG [RS:2;jenkins-hbase9:45597] zookeeper.ZKUtil(162): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:06,537 DEBUG [RS:1;jenkins-hbase9:39623] zookeeper.ZKUtil(162): regionserver:39623-0x1015920fb080002, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:06,538 DEBUG [RS:0;jenkins-hbase9:42501] zookeeper.ZKUtil(162): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:06,538 DEBUG [RS:2;jenkins-hbase9:45597] zookeeper.ZKUtil(162): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:06,538 DEBUG [RS:1;jenkins-hbase9:39623] zookeeper.ZKUtil(162): regionserver:39623-0x1015920fb080002, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:06,539 DEBUG [RS:0;jenkins-hbase9:42501] zookeeper.ZKUtil(162): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:06,546 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 10:58:06,547 DEBUG [RS:2;jenkins-hbase9:45597] zookeeper.ZKUtil(162): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:06,547 DEBUG [RS:0;jenkins-hbase9:42501] zookeeper.ZKUtil(162): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:06,551 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info 2023-07-12 10:58:06,552 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 10:58:06,553 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:06,554 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 10:58:06,564 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:58:06,565 DEBUG [RS:0;jenkins-hbase9:42501] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:06,565 DEBUG [RS:2;jenkins-hbase9:45597] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:06,583 INFO [RS:2;jenkins-hbase9:45597] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:06,594 DEBUG [RS:1;jenkins-hbase9:39623] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:06,598 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 10:58:06,600 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:06,597 INFO [RS:1;jenkins-hbase9:39623] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:06,601 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 10:58:06,601 INFO [RS:0;jenkins-hbase9:42501] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:06,604 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table 2023-07-12 10:58:06,605 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 10:58:06,606 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:06,608 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740 2023-07-12 10:58:06,612 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740 2023-07-12 10:58:06,616 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 10:58:06,619 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 10:58:06,625 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:06,626 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11247715360, jitterRate=0.04752512276172638}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 10:58:06,626 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 10:58:06,626 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 10:58:06,626 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 10:58:06,626 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 10:58:06,626 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 10:58:06,626 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 10:58:06,628 INFO [RS:0;jenkins-hbase9:42501] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:06,630 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 10:58:06,631 INFO [RS:2;jenkins-hbase9:45597] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:06,630 INFO [RS:1;jenkins-hbase9:39623] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:06,631 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 10:58:06,635 INFO [RS:2;jenkins-hbase9:45597] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:06,635 INFO [RS:0;jenkins-hbase9:42501] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:06,635 INFO [RS:1;jenkins-hbase9:39623] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:06,636 INFO [RS:0;jenkins-hbase9:42501] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:06,636 INFO [RS:2;jenkins-hbase9:45597] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:06,636 INFO [RS:1;jenkins-hbase9:39623] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:06,638 INFO [RS:0;jenkins-hbase9:42501] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:06,639 INFO [RS:1;jenkins-hbase9:39623] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:06,639 INFO [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:06,641 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 10:58:06,641 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 10:58:06,650 INFO [RS:2;jenkins-hbase9:45597] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:06,650 DEBUG [RS:2;jenkins-hbase9:45597] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,651 DEBUG [RS:2;jenkins-hbase9:45597] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,651 DEBUG [RS:2;jenkins-hbase9:45597] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,651 INFO [RS:0;jenkins-hbase9:42501] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:06,651 DEBUG [RS:2;jenkins-hbase9:45597] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,651 DEBUG [RS:0;jenkins-hbase9:42501] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,652 DEBUG [RS:0;jenkins-hbase9:42501] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,652 DEBUG [RS:2;jenkins-hbase9:45597] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,652 DEBUG [RS:0;jenkins-hbase9:42501] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,652 DEBUG [RS:2;jenkins-hbase9:45597] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:06,652 DEBUG [RS:0;jenkins-hbase9:42501] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,652 DEBUG [RS:2;jenkins-hbase9:45597] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,652 DEBUG [RS:0;jenkins-hbase9:42501] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,653 DEBUG [RS:2;jenkins-hbase9:45597] executor.ExecutorService(93): Starting executor service 
name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,653 DEBUG [RS:0;jenkins-hbase9:42501] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:06,653 DEBUG [RS:2;jenkins-hbase9:45597] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,653 DEBUG [RS:0;jenkins-hbase9:42501] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,653 DEBUG [RS:2;jenkins-hbase9:45597] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,653 DEBUG [RS:0;jenkins-hbase9:42501] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,653 DEBUG [RS:0;jenkins-hbase9:42501] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,653 DEBUG [RS:0;jenkins-hbase9:42501] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,654 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 10:58:06,655 INFO [RS:1;jenkins-hbase9:39623] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:06,670 INFO [RS:2;jenkins-hbase9:45597] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:06,670 DEBUG [RS:1;jenkins-hbase9:39623] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,670 INFO [RS:2;jenkins-hbase9:45597] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:06,671 INFO [RS:2;jenkins-hbase9:45597] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:06,671 DEBUG [RS:1;jenkins-hbase9:39623] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,671 DEBUG [RS:1;jenkins-hbase9:39623] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,671 DEBUG [RS:1;jenkins-hbase9:39623] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,671 DEBUG [RS:1;jenkins-hbase9:39623] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,671 DEBUG [RS:1;jenkins-hbase9:39623] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:06,672 DEBUG [RS:1;jenkins-hbase9:39623] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,672 DEBUG [RS:1;jenkins-hbase9:39623] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,672 DEBUG [RS:1;jenkins-hbase9:39623] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,672 DEBUG [RS:1;jenkins-hbase9:39623] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:06,682 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 10:58:06,683 INFO [RS:0;jenkins-hbase9:42501] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:06,683 INFO [RS:0;jenkins-hbase9:42501] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:06,683 INFO [RS:0;jenkins-hbase9:42501] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:06,683 INFO [RS:1;jenkins-hbase9:39623] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:06,684 INFO [RS:1;jenkins-hbase9:39623] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:06,684 INFO [RS:1;jenkins-hbase9:39623] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:06,686 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-12 10:58:06,701 INFO [RS:0;jenkins-hbase9:42501] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:06,701 INFO [RS:1;jenkins-hbase9:39623] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:06,702 INFO [RS:2;jenkins-hbase9:45597] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:06,705 INFO [RS:0;jenkins-hbase9:42501] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,42501,1689159484335-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:06,705 INFO [RS:2;jenkins-hbase9:45597] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,45597,1689159484713-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:06,705 INFO [RS:1;jenkins-hbase9:39623] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,39623,1689159484526-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:06,731 INFO [RS:0;jenkins-hbase9:42501] regionserver.Replication(203): jenkins-hbase9.apache.org,42501,1689159484335 started 2023-07-12 10:58:06,732 INFO [RS:0;jenkins-hbase9:42501] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,42501,1689159484335, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:42501, sessionid=0x1015920fb080001 2023-07-12 10:58:06,732 DEBUG [RS:0;jenkins-hbase9:42501] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:06,732 DEBUG [RS:0;jenkins-hbase9:42501] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:06,732 DEBUG [RS:0;jenkins-hbase9:42501] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,42501,1689159484335' 2023-07-12 10:58:06,732 DEBUG [RS:0;jenkins-hbase9:42501] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:06,733 DEBUG [RS:0;jenkins-hbase9:42501] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:06,734 DEBUG [RS:0;jenkins-hbase9:42501] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:06,734 DEBUG [RS:0;jenkins-hbase9:42501] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:06,734 DEBUG [RS:0;jenkins-hbase9:42501] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:06,734 DEBUG [RS:0;jenkins-hbase9:42501] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,42501,1689159484335' 2023-07-12 10:58:06,734 DEBUG [RS:0;jenkins-hbase9:42501] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:06,734 DEBUG [RS:0;jenkins-hbase9:42501] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:06,735 DEBUG 
[RS:0;jenkins-hbase9:42501] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:06,735 INFO [RS:0;jenkins-hbase9:42501] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 10:58:06,735 INFO [RS:0;jenkins-hbase9:42501] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 10:58:06,738 INFO [RS:1;jenkins-hbase9:39623] regionserver.Replication(203): jenkins-hbase9.apache.org,39623,1689159484526 started 2023-07-12 10:58:06,738 INFO [RS:1;jenkins-hbase9:39623] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,39623,1689159484526, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:39623, sessionid=0x1015920fb080002 2023-07-12 10:58:06,738 DEBUG [RS:1;jenkins-hbase9:39623] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:06,738 DEBUG [RS:1;jenkins-hbase9:39623] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:06,739 DEBUG [RS:1;jenkins-hbase9:39623] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,39623,1689159484526' 2023-07-12 10:58:06,740 DEBUG [RS:1;jenkins-hbase9:39623] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:06,740 DEBUG [RS:1;jenkins-hbase9:39623] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:06,741 DEBUG [RS:1;jenkins-hbase9:39623] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:06,741 DEBUG [RS:1;jenkins-hbase9:39623] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:06,741 DEBUG [RS:1;jenkins-hbase9:39623] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:06,741 DEBUG [RS:1;jenkins-hbase9:39623] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,39623,1689159484526' 2023-07-12 10:58:06,741 DEBUG [RS:1;jenkins-hbase9:39623] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:06,742 DEBUG [RS:1;jenkins-hbase9:39623] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:06,743 DEBUG [RS:1;jenkins-hbase9:39623] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:06,743 INFO [RS:1;jenkins-hbase9:39623] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 10:58:06,743 INFO [RS:1;jenkins-hbase9:39623] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 10:58:06,744 INFO [RS:2;jenkins-hbase9:45597] regionserver.Replication(203): jenkins-hbase9.apache.org,45597,1689159484713 started 2023-07-12 10:58:06,744 INFO [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,45597,1689159484713, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:45597, sessionid=0x1015920fb080003 2023-07-12 10:58:06,744 DEBUG [RS:2;jenkins-hbase9:45597] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:06,744 DEBUG [RS:2;jenkins-hbase9:45597] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:06,744 DEBUG [RS:2;jenkins-hbase9:45597] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,45597,1689159484713' 2023-07-12 10:58:06,744 DEBUG [RS:2;jenkins-hbase9:45597] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:06,745 DEBUG [RS:2;jenkins-hbase9:45597] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:06,746 DEBUG [RS:2;jenkins-hbase9:45597] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:06,746 DEBUG [RS:2;jenkins-hbase9:45597] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:06,746 DEBUG [RS:2;jenkins-hbase9:45597] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:06,746 DEBUG [RS:2;jenkins-hbase9:45597] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,45597,1689159484713' 2023-07-12 10:58:06,746 DEBUG [RS:2;jenkins-hbase9:45597] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:06,747 DEBUG [RS:2;jenkins-hbase9:45597] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:06,748 DEBUG [RS:2;jenkins-hbase9:45597] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:06,748 INFO [RS:2;jenkins-hbase9:45597] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 10:58:06,748 INFO [RS:2;jenkins-hbase9:45597] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 10:58:06,838 DEBUG [jenkins-hbase9:41017] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 10:58:06,851 INFO [RS:1;jenkins-hbase9:39623] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C39623%2C1689159484526, suffix=, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,39623,1689159484526, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs, maxLogs=32 2023-07-12 10:58:06,851 INFO [RS:2;jenkins-hbase9:45597] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C45597%2C1689159484713, suffix=, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,45597,1689159484713, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs, maxLogs=32 2023-07-12 10:58:06,858 DEBUG [jenkins-hbase9:41017] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:06,860 INFO [RS:0;jenkins-hbase9:42501] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C42501%2C1689159484335, suffix=, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,42501,1689159484335, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs, maxLogs=32 2023-07-12 10:58:06,867 DEBUG [jenkins-hbase9:41017] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:06,868 DEBUG [jenkins-hbase9:41017] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:06,868 DEBUG [jenkins-hbase9:41017] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:06,868 DEBUG [jenkins-hbase9:41017] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:06,876 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,39623,1689159484526, state=OPENING 2023-07-12 10:58:06,892 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-12 10:58:06,894 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:06,902 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 10:58:06,902 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,39623,1689159484526}] 2023-07-12 10:58:06,923 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:06,923 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:06,923 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:06,938 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:06,938 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:06,939 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:06,942 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:06,942 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:06,942 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:06,957 INFO [RS:1;jenkins-hbase9:39623] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,39623,1689159484526/jenkins-hbase9.apache.org%2C39623%2C1689159484526.1689159486863 2023-07-12 10:58:06,957 INFO [RS:0;jenkins-hbase9:42501] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,42501,1689159484335/jenkins-hbase9.apache.org%2C42501%2C1689159484335.1689159486868 2023-07-12 10:58:06,957 INFO [RS:2;jenkins-hbase9:45597] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,45597,1689159484713/jenkins-hbase9.apache.org%2C45597%2C1689159484713.1689159486863 2023-07-12 10:58:06,958 WARN [ReadOnlyZKClient-127.0.0.1:49301@0x2b1f7a51] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-12 10:58:06,961 DEBUG [RS:1;jenkins-hbase9:39623] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK], DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK], 
DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK]] 2023-07-12 10:58:06,969 DEBUG [RS:2;jenkins-hbase9:45597] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK], DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK], DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK]] 2023-07-12 10:58:06,970 DEBUG [RS:0;jenkins-hbase9:42501] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK], DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK], DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK]] 2023-07-12 10:58:07,000 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,41017,1689159482181] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:07,008 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:55648, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:07,009 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39623] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.2.10:55648 deadline: 1689159547008, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:07,124 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:07,128 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:07,133 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:55660, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:07,147 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 10:58:07,151 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:07,157 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C39623%2C1689159484526.meta, suffix=.meta, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,39623,1689159484526, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs, maxLogs=32 2023-07-12 10:58:07,185 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:07,186 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:07,185 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:07,194 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,39623,1689159484526/jenkins-hbase9.apache.org%2C39623%2C1689159484526.meta.1689159487159.meta 2023-07-12 10:58:07,195 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK], DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK], DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK]] 2023-07-12 10:58:07,196 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:07,198 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:58:07,203 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 10:58:07,206 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-12 10:58:07,212 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 10:58:07,212 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:07,213 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 10:58:07,213 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 10:58:07,220 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 10:58:07,222 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info 2023-07-12 10:58:07,223 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info 2023-07-12 10:58:07,223 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 10:58:07,224 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:07,225 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 10:58:07,227 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:58:07,227 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:58:07,227 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 10:58:07,228 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:07,228 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 10:58:07,230 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table 2023-07-12 10:58:07,230 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table 2023-07-12 10:58:07,231 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 10:58:07,231 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:07,233 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740 2023-07-12 10:58:07,237 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740 2023-07-12 10:58:07,241 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 10:58:07,245 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 10:58:07,247 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10328522560, jitterRate=-0.03808137774467468}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 10:58:07,247 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 10:58:07,263 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689159487121 2023-07-12 10:58:07,284 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 10:58:07,285 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 10:58:07,285 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,39623,1689159484526, state=OPEN 2023-07-12 10:58:07,290 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 10:58:07,290 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 10:58:07,294 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 10:58:07,294 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,39623,1689159484526 in 388 msec 2023-07-12 10:58:07,301 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 10:58:07,301 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 641 msec 2023-07-12 10:58:07,311 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.1190 sec 2023-07-12 10:58:07,311 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689159487311, completionTime=-1 2023-07-12 10:58:07,311 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 10:58:07,311 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-12 10:58:07,368 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 10:58:07,368 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689159547368 2023-07-12 10:58:07,368 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689159607368 2023-07-12 10:58:07,368 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 56 msec 2023-07-12 10:58:07,386 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,41017,1689159482181-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:07,386 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,41017,1689159482181-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:07,386 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,41017,1689159482181-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:07,388 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase9:41017, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:07,389 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:07,396 DEBUG [master/jenkins-hbase9:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 10:58:07,409 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-12 10:58:07,411 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:07,421 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 10:58:07,426 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:07,429 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:07,447 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:07,450 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25 empty. 2023-07-12 10:58:07,451 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:07,451 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 10:58:07,491 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:07,493 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => e5addb24bba6e8be9d4cddc12a45ff25, NAME => 'hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:07,511 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:07,511 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing e5addb24bba6e8be9d4cddc12a45ff25, disabling compactions & flushes 2023-07-12 10:58:07,511 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 
2023-07-12 10:58:07,512 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:07,512 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. after waiting 0 ms 2023-07-12 10:58:07,512 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:07,512 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:07,512 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for e5addb24bba6e8be9d4cddc12a45ff25: 2023-07-12 10:58:07,516 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:07,528 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,41017,1689159482181] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:07,531 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,41017,1689159482181] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 10:58:07,534 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:07,536 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:07,538 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159487519"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159487519"}]},"ts":"1689159487519"} 2023-07-12 10:58:07,540 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:07,541 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448 empty. 
2023-07-12 10:58:07,542 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:07,542 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 10:58:07,588 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 10:58:07,593 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:07,597 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:07,599 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0832c48321f808d3b4d6fb68605b1448, NAME => 'hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:07,602 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159487593"}]},"ts":"1689159487593"} 2023-07-12 10:58:07,618 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 10:58:07,630 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:07,630 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:07,630 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:07,630 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:07,631 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:07,635 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, ASSIGN}] 2023-07-12 10:58:07,643 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, ASSIGN 2023-07-12 10:58:07,643 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; 
minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:07,644 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 0832c48321f808d3b4d6fb68605b1448, disabling compactions & flushes 2023-07-12 10:58:07,644 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:07,644 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:07,644 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. after waiting 0 ms 2023-07-12 10:58:07,644 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:07,644 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:07,644 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 0832c48321f808d3b4d6fb68605b1448: 2023-07-12 10:58:07,645 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,39623,1689159484526; forceNewPlan=false, retain=false 2023-07-12 10:58:07,662 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:07,664 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159487663"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159487663"}]},"ts":"1689159487663"} 2023-07-12 10:58:07,669 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 10:58:07,673 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:07,673 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159487673"}]},"ts":"1689159487673"} 2023-07-12 10:58:07,679 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 10:58:07,683 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:07,684 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:07,684 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:07,684 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:07,684 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:07,684 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, ASSIGN}] 2023-07-12 10:58:07,687 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, ASSIGN 2023-07-12 10:58:07,688 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,39623,1689159484526; forceNewPlan=false, retain=false 2023-07-12 10:58:07,689 INFO [jenkins-hbase9:41017] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
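[editor's note] The two TransitRegionStateProcedures above (pid=6 and pid=7) assign one region each for hbase:namespace and hbase:rsgroup and record the OPENING/OPEN transitions in hbase:meta. From a client the resulting placement can be inspected without reading meta directly; a small sketch, assuming a live Connection (names are illustrative):

import java.util.List;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class ShowAssignments {
  // Prints where each region of a table is currently assigned.
  static void printLocations(Connection conn, TableName table) throws Exception {
    try (RegionLocator locator = conn.getRegionLocator(table)) {
      List<HRegionLocation> locations = locator.getAllRegionLocations();
      for (HRegionLocation loc : locations) {
        System.out.println(loc.getRegion().getEncodedName() // e.g. e5addb24... in this run
            + " -> " + loc.getServerName());                 // e.g. jenkins-hbase9...,39623,...
      }
    }
  }
}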
2023-07-12 10:58:07,691 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=e5addb24bba6e8be9d4cddc12a45ff25, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:07,691 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=0832c48321f808d3b4d6fb68605b1448, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:07,692 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159487691"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159487691"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159487691"}]},"ts":"1689159487691"} 2023-07-12 10:58:07,692 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159487691"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159487691"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159487691"}]},"ts":"1689159487691"} 2023-07-12 10:58:07,700 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE; OpenRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,39623,1689159484526}] 2023-07-12 10:58:07,702 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=6, state=RUNNABLE; OpenRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,39623,1689159484526}] 2023-07-12 10:58:07,859 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:07,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0832c48321f808d3b4d6fb68605b1448, NAME => 'hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:07,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:58:07,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. service=MultiRowMutationService 2023-07-12 10:58:07,861 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-12 10:58:07,861 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:07,861 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:07,861 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:07,862 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:07,864 INFO [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:07,869 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m 2023-07-12 10:58:07,869 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m 2023-07-12 10:58:07,870 INFO [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0832c48321f808d3b4d6fb68605b1448 columnFamilyName m 2023-07-12 10:58:07,871 INFO [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(310): Store=0832c48321f808d3b4d6fb68605b1448/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:07,872 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:07,873 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:07,879 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] 
regionserver.HRegion(1055): writing seq id for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:07,882 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:07,883 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 0832c48321f808d3b4d6fb68605b1448; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@7e3f7fe6, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:07,884 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 0832c48321f808d3b4d6fb68605b1448: 2023-07-12 10:58:07,886 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448., pid=8, masterSystemTime=1689159487853 2023-07-12 10:58:07,890 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:07,890 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:07,891 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:07,891 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e5addb24bba6e8be9d4cddc12a45ff25, NAME => 'hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:07,891 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:07,891 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:07,891 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:07,892 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:07,892 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=0832c48321f808d3b4d6fb68605b1448, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:07,893 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159487892"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159487892"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159487892"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159487892"}]},"ts":"1689159487892"} 2023-07-12 10:58:07,894 INFO [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:07,897 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info 2023-07-12 10:58:07,897 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info 2023-07-12 10:58:07,898 INFO [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e5addb24bba6e8be9d4cddc12a45ff25 columnFamilyName info 2023-07-12 10:58:07,899 INFO [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(310): Store=e5addb24bba6e8be9d4cddc12a45ff25/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:07,901 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:07,902 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-12 10:58:07,902 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; OpenRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,39623,1689159484526 in 197 msec 2023-07-12 10:58:07,903 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:07,907 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-12 10:58:07,907 INFO 
[PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, ASSIGN in 218 msec 2023-07-12 10:58:07,911 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:07,911 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:07,911 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159487911"}]},"ts":"1689159487911"} 2023-07-12 10:58:07,914 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:07,915 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-12 10:58:07,915 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened e5addb24bba6e8be9d4cddc12a45ff25; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9485121120, jitterRate=-0.11662925779819489}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:07,915 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for e5addb24bba6e8be9d4cddc12a45ff25: 2023-07-12 10:58:07,916 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25., pid=9, masterSystemTime=1689159487853 2023-07-12 10:58:07,918 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:07,921 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:07,921 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 
2023-07-12 10:58:07,921 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=e5addb24bba6e8be9d4cddc12a45ff25, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:07,922 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 390 msec 2023-07-12 10:58:07,922 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159487921"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159487921"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159487921"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159487921"}]},"ts":"1689159487921"} 2023-07-12 10:58:07,928 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=6 2023-07-12 10:58:07,928 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=6, state=SUCCESS; OpenRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,39623,1689159484526 in 222 msec 2023-07-12 10:58:07,932 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-12 10:58:07,934 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, ASSIGN in 294 msec 2023-07-12 10:58:07,935 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:07,936 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159487935"}]},"ts":"1689159487935"} 2023-07-12 10:58:07,939 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 10:58:07,944 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:07,946 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 532 msec 2023-07-12 10:58:07,961 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 10:58:07,961 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-12 10:58:08,024 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 10:58:08,025 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:08,025 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:08,031 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:08,031 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:08,034 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 10:58:08,040 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 10:58:08,045 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 10:58:08,059 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:08,065 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 29 msec 2023-07-12 10:58:08,071 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 10:58:08,082 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:08,088 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 16 msec 2023-07-12 10:58:08,097 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 10:58:08,101 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 10:58:08,101 INFO 
[master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.117sec 2023-07-12 10:58:08,104 INFO [master/jenkins-hbase9:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-12 10:58:08,106 INFO [master/jenkins-hbase9:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 10:58:08,106 INFO [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 10:58:08,108 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,41017,1689159482181-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 10:58:08,108 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,41017,1689159482181-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-12 10:58:08,117 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 10:58:08,127 DEBUG [Listener at localhost/44831] zookeeper.ReadOnlyZKClient(139): Connect 0x15f52062 to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:08,133 DEBUG [Listener at localhost/44831] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@419a8bd0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:08,150 DEBUG [hconnection-0xddfa172-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:08,164 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:55670, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:08,175 INFO [Listener at localhost/44831] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase9.apache.org,41017,1689159482181 2023-07-12 10:58:08,176 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:08,185 DEBUG [Listener at localhost/44831] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 10:58:08,188 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:45870, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 10:58:08,201 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 10:58:08,201 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:08,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(492): Client=jenkins//172.31.2.10 set balanceSwitch=false 2023-07-12 10:58:08,206 DEBUG [Listener at localhost/44831] zookeeper.ReadOnlyZKClient(139): Connect 0x03d0a6d6 
to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:08,212 DEBUG [Listener at localhost/44831] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@33ae7c0c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:08,212 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:08,215 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:08,215 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1015920fb08000a connected 2023-07-12 10:58:08,246 INFO [Listener at localhost/44831] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testClearNotProcessedDeadServer Thread=421, OpenFileDescriptor=673, MaxFileDescriptor=60000, SystemLoadAverage=328, ProcessCount=172, AvailableMemoryMB=6274 2023-07-12 10:58:08,249 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(132): testClearNotProcessedDeadServer 2023-07-12 10:58:08,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:08,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:08,323 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-12 10:58:08,336 INFO [Listener at localhost/44831] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:08,337 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:08,337 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:08,337 INFO [Listener at localhost/44831] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:08,337 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:08,337 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:08,337 INFO [Listener at localhost/44831] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:08,341 INFO [Listener at localhost/44831] ipc.NettyRpcServer(120): Bind to /172.31.2.10:43117 2023-07-12 10:58:08,341 INFO [Listener at 
localhost/44831] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:08,342 DEBUG [Listener at localhost/44831] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:08,344 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:08,347 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:08,351 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43117 connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:08,354 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:431170x0, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:08,356 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43117-0x1015920fb08000b connected 2023-07-12 10:58:08,356 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(162): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 10:58:08,357 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(162): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-12 10:58:08,358 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:08,361 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43117 2023-07-12 10:58:08,361 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43117 2023-07-12 10:58:08,364 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43117 2023-07-12 10:58:08,365 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43117 2023-07-12 10:58:08,365 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43117 2023-07-12 10:58:08,367 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:08,367 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:08,367 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:08,368 INFO [Listener at localhost/44831] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 
10:58:08,368 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:08,368 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:08,368 INFO [Listener at localhost/44831] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 10:58:08,369 INFO [Listener at localhost/44831] http.HttpServer(1146): Jetty bound to port 39059 2023-07-12 10:58:08,369 INFO [Listener at localhost/44831] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:08,370 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:08,370 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@357467f6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:08,371 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:08,371 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@104b60bc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:08,494 INFO [Listener at localhost/44831] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:08,495 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:08,495 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:08,496 INFO [Listener at localhost/44831] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 10:58:08,497 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:08,498 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3d872323{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/jetty-0_0_0_0-39059-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7958807247498263220/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:08,499 INFO [Listener at localhost/44831] server.AbstractConnector(333): Started ServerConnector@bebdc87{HTTP/1.1, (http/1.1)}{0.0.0.0:39059} 2023-07-12 10:58:08,500 INFO [Listener at localhost/44831] server.Server(415): Started @12192ms 2023-07-12 10:58:08,504 INFO [RS:3;jenkins-hbase9:43117] regionserver.HRegionServer(951): ClusterId : 2ee0ec36-84f9-4576-888d-f37f0b52beaa 2023-07-12 10:58:08,505 DEBUG [RS:3;jenkins-hbase9:43117] 
procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:08,507 DEBUG [RS:3;jenkins-hbase9:43117] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:08,507 DEBUG [RS:3;jenkins-hbase9:43117] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:08,510 DEBUG [RS:3;jenkins-hbase9:43117] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:08,512 DEBUG [RS:3;jenkins-hbase9:43117] zookeeper.ReadOnlyZKClient(139): Connect 0x662cd978 to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:08,516 DEBUG [RS:3;jenkins-hbase9:43117] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6cd4b208, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:08,517 DEBUG [RS:3;jenkins-hbase9:43117] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@29d621f6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:08,526 DEBUG [RS:3;jenkins-hbase9:43117] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase9:43117 2023-07-12 10:58:08,526 INFO [RS:3;jenkins-hbase9:43117] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:08,526 INFO [RS:3;jenkins-hbase9:43117] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:08,526 DEBUG [RS:3;jenkins-hbase9:43117] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 10:58:08,527 INFO [RS:3;jenkins-hbase9:43117] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,41017,1689159482181 with isa=jenkins-hbase9.apache.org/172.31.2.10:43117, startcode=1689159488336 2023-07-12 10:58:08,527 DEBUG [RS:3;jenkins-hbase9:43117] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:08,531 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:33809, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:08,532 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41017] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:08,532 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 10:58:08,532 DEBUG [RS:3;jenkins-hbase9:43117] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5 2023-07-12 10:58:08,532 DEBUG [RS:3;jenkins-hbase9:43117] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42757 2023-07-12 10:58:08,532 DEBUG [RS:3;jenkins-hbase9:43117] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35301 2023-07-12 10:58:08,539 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:08,539 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:39623-0x1015920fb080002, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:08,540 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:08,541 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:08,542 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:08,542 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 10:58:08,542 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,43117,1689159488336] 2023-07-12 10:58:08,543 DEBUG [RS:3;jenkins-hbase9:43117] zookeeper.ZKUtil(162): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:08,543 WARN [RS:3;jenkins-hbase9:43117] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
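[editor's note] The RS:3 entries above correspond to the test's "Restoring servers: 1" step: a fourth region server is started inside the single-JVM minicluster, reports for duty to the master, and is picked up by the RSGroup ServerEventsListenerThread. A hedged sketch of how a test can add a region server to a running minicluster, assuming an already-initialized HBaseTestingUtility (the field name TEST_UTIL is illustrative, not a claim about this test's internals):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class AddRegionServerExample {
  // Starts one more region server in the minicluster and waits for it to come online.
  static void addRegionServer(HBaseTestingUtility TEST_UTIL) throws Exception {
    JVMClusterUtil.RegionServerThread rst =
        TEST_UTIL.getMiniHBaseCluster().startRegionServer();
    rst.waitForServerOnline(); // blocks until the RS has registered, as RS:3 does in the log
  }
}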
2023-07-12 10:58:08,543 INFO [RS:3;jenkins-hbase9:43117] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:08,543 DEBUG [RS:3;jenkins-hbase9:43117] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:08,590 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39623-0x1015920fb080002, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:08,595 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:08,595 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:08,595 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39623-0x1015920fb080002, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:08,595 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-12 10:58:08,597 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:08,598 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:08,599 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39623-0x1015920fb080002, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:08,600 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:08,600 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:08,600 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39623-0x1015920fb080002, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:08,601 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:08,602 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:08,603 DEBUG [RS:3;jenkins-hbase9:43117] zookeeper.ZKUtil(162): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:08,603 DEBUG [RS:3;jenkins-hbase9:43117] zookeeper.ZKUtil(162): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:08,604 DEBUG [RS:3;jenkins-hbase9:43117] zookeeper.ZKUtil(162): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:08,604 DEBUG [RS:3;jenkins-hbase9:43117] zookeeper.ZKUtil(162): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:08,606 DEBUG [RS:3;jenkins-hbase9:43117] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:08,606 INFO [RS:3;jenkins-hbase9:43117] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:08,608 INFO [RS:3;jenkins-hbase9:43117] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:08,608 INFO [RS:3;jenkins-hbase9:43117] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:08,609 INFO [RS:3;jenkins-hbase9:43117] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:08,609 INFO [RS:3;jenkins-hbase9:43117] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:08,611 INFO [RS:3;jenkins-hbase9:43117] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
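[editor's note] During startup above, the new region server reports its memstore limits (globalMemStoreLimit=782.4 M, low-water mark 743.3 M) and a PressureAwareCompactionThroughputController bounded at 50-100 MB/s. These derive from configuration; the sketch below sets what I believe are the standard keys behind those numbers (key names should be verified against the running version; values are illustrative, not the defaults used in this run).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class TuneMemstoreAndCompactionThroughput {
  // Returns a Configuration with the knobs behind the logged limits; values are illustrative.
  static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // Fraction of the heap shared by all memstores (drives globalMemStoreLimit above).
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    // Low-water mark as a fraction of the limit (drives globalMemStoreLimitLowMark).
    conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
    // Bounds used by PressureAwareCompactionThroughputController, in bytes per second.
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
    return conf;
  }
}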
2023-07-12 10:58:08,611 DEBUG [RS:3;jenkins-hbase9:43117] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:08,611 DEBUG [RS:3;jenkins-hbase9:43117] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:08,611 DEBUG [RS:3;jenkins-hbase9:43117] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:08,611 DEBUG [RS:3;jenkins-hbase9:43117] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:08,611 DEBUG [RS:3;jenkins-hbase9:43117] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:08,611 DEBUG [RS:3;jenkins-hbase9:43117] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:08,611 DEBUG [RS:3;jenkins-hbase9:43117] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:08,611 DEBUG [RS:3;jenkins-hbase9:43117] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:08,612 DEBUG [RS:3;jenkins-hbase9:43117] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:08,612 DEBUG [RS:3;jenkins-hbase9:43117] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:08,613 INFO [RS:3;jenkins-hbase9:43117] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:08,614 INFO [RS:3;jenkins-hbase9:43117] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:08,614 INFO [RS:3;jenkins-hbase9:43117] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:08,629 INFO [RS:3;jenkins-hbase9:43117] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:08,629 INFO [RS:3;jenkins-hbase9:43117] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,43117,1689159488336-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:08,640 INFO [RS:3;jenkins-hbase9:43117] regionserver.Replication(203): jenkins-hbase9.apache.org,43117,1689159488336 started 2023-07-12 10:58:08,640 INFO [RS:3;jenkins-hbase9:43117] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,43117,1689159488336, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:43117, sessionid=0x1015920fb08000b 2023-07-12 10:58:08,640 DEBUG [RS:3;jenkins-hbase9:43117] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:08,640 DEBUG [RS:3;jenkins-hbase9:43117] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:08,640 DEBUG [RS:3;jenkins-hbase9:43117] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,43117,1689159488336' 2023-07-12 10:58:08,640 DEBUG [RS:3;jenkins-hbase9:43117] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:08,641 DEBUG [RS:3;jenkins-hbase9:43117] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:08,642 DEBUG [RS:3;jenkins-hbase9:43117] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:08,642 DEBUG [RS:3;jenkins-hbase9:43117] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:08,642 DEBUG [RS:3;jenkins-hbase9:43117] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:08,642 DEBUG [RS:3;jenkins-hbase9:43117] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,43117,1689159488336' 2023-07-12 10:58:08,642 DEBUG [RS:3;jenkins-hbase9:43117] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:08,643 DEBUG [RS:3;jenkins-hbase9:43117] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:08,643 DEBUG [RS:3;jenkins-hbase9:43117] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:08,643 INFO [RS:3;jenkins-hbase9:43117] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 10:58:08,643 INFO [RS:3;jenkins-hbase9:43117] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 10:58:08,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:08,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:08,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:08,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:08,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:08,656 DEBUG [hconnection-0x324c1766-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:08,659 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:55686, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:08,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:08,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:08,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:41017] to rsgroup master 2023-07-12 10:58:08,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:08,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.2.10:45870 deadline: 1689160688678, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
2023-07-12 10:58:08,680 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 10:58:08,682 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:08,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:08,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:08,684 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:39623, jenkins-hbase9.apache.org:42501, jenkins-hbase9.apache.org:43117, jenkins-hbase9.apache.org:45597], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:08,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:08,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:08,690 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBasics(260): testClearNotProcessedDeadServer 2023-07-12 10:58:08,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:08,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:08,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup deadServerGroup 2023-07-12 10:58:08,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:08,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:08,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-12 10:58:08,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:08,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:08,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:08,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 
2023-07-12 10:58:08,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:39623] to rsgroup deadServerGroup 2023-07-12 10:58:08,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:08,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:08,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-12 10:58:08,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:08,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(238): Moving server region 0832c48321f808d3b4d6fb68605b1448, which do not belong to RSGroup deadServerGroup 2023-07-12 10:58:08,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:08,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:08,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:08,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:08,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 10:58:08,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:08,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, REOPEN/MOVE 2023-07-12 10:58:08,724 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, REOPEN/MOVE 2023-07-12 10:58:08,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(238): Moving server region e5addb24bba6e8be9d4cddc12a45ff25, which do not belong to RSGroup deadServerGroup 2023-07-12 10:58:08,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:08,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:08,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:08,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 
10:58:08,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 10:58:08,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:08,725 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=0832c48321f808d3b4d6fb68605b1448, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:08,725 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159488725"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159488725"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159488725"}]},"ts":"1689159488725"} 2023-07-12 10:58:08,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, REOPEN/MOVE 2023-07-12 10:58:08,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup deadServerGroup 2023-07-12 10:58:08,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:08,727 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, REOPEN/MOVE 2023-07-12 10:58:08,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:08,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:08,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:08,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 10:58:08,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:08,728 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e5addb24bba6e8be9d4cddc12a45ff25, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:08,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-12 10:58:08,729 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE; CloseRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,39623,1689159484526}] 2023-07-12 10:58:08,729 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-12 10:58:08,729 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159488728"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159488728"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159488728"}]},"ts":"1689159488728"} 2023-07-12 10:58:08,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(286): Moving 3 region(s) to group default, current retry=0 2023-07-12 10:58:08,731 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; CloseRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,39623,1689159484526}] 2023-07-12 10:58:08,734 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,39623,1689159484526, state=CLOSING 2023-07-12 10:58:08,736 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 10:58:08,736 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=14, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,39623,1689159484526}] 2023-07-12 10:58:08,736 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 10:58:08,746 INFO [RS:3;jenkins-hbase9:43117] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C43117%2C1689159488336, suffix=, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,43117,1689159488336, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs, maxLogs=32 2023-07-12 10:58:08,768 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:08,768 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:08,768 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:08,778 INFO [RS:3;jenkins-hbase9:43117] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,43117,1689159488336/jenkins-hbase9.apache.org%2C43117%2C1689159488336.1689159488747 2023-07-12 10:58:08,778 DEBUG [RS:3;jenkins-hbase9:43117] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with 
pipeline: [DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK], DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK], DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK]] 2023-07-12 10:58:08,891 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:08,891 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-12 10:58:08,892 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 10:58:08,892 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 0832c48321f808d3b4d6fb68605b1448, disabling compactions & flushes 2023-07-12 10:58:08,892 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 10:58:08,892 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:08,892 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 10:58:08,892 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:08,892 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 10:58:08,892 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. after waiting 0 ms 2023-07-12 10:58:08,892 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 10:58:08,892 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 
2023-07-12 10:58:08,893 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 0832c48321f808d3b4d6fb68605b1448 1/1 column families, dataSize=1.27 KB heapSize=2.24 KB 2023-07-12 10:58:08,893 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.21 KB heapSize=6.16 KB 2023-07-12 10:58:08,996 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.03 KB at sequenceid=16 (bloomFilter=false), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/info/b49a1b1f3cde4a5f98186fb585abc133 2023-07-12 10:58:08,996 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.27 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/.tmp/m/8fa7cb9488f24a899b6cdde7163b9c4c 2023-07-12 10:58:09,051 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/.tmp/m/8fa7cb9488f24a899b6cdde7163b9c4c as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/8fa7cb9488f24a899b6cdde7163b9c4c 2023-07-12 10:58:09,073 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/8fa7cb9488f24a899b6cdde7163b9c4c, entries=3, sequenceid=9, filesize=5.1 K 2023-07-12 10:58:09,078 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=16 (bloomFilter=false), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/table/a69ee6b8f1cc4724a5b721bd5c87f29a 2023-07-12 10:58:09,080 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.27 KB/1298, heapSize ~2.23 KB/2280, currentSize=0 B/0 for 0832c48321f808d3b4d6fb68605b1448 in 187ms, sequenceid=9, compaction requested=false 2023-07-12 10:58:09,082 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 10:58:09,100 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-12 10:58:09,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:09,101 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/info/b49a1b1f3cde4a5f98186fb585abc133 as 
hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/b49a1b1f3cde4a5f98186fb585abc133 2023-07-12 10:58:09,102 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:09,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 0832c48321f808d3b4d6fb68605b1448: 2023-07-12 10:58:09,102 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 0832c48321f808d3b4d6fb68605b1448 move to jenkins-hbase9.apache.org,42501,1689159484335 record at close sequenceid=9 2023-07-12 10:58:09,106 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=15, ppid=12, state=RUNNABLE; CloseRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:09,106 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:09,107 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:09,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing e5addb24bba6e8be9d4cddc12a45ff25, disabling compactions & flushes 2023-07-12 10:58:09,108 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:09,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:09,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. after waiting 0 ms 2023-07-12 10:58:09,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 
2023-07-12 10:58:09,108 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing e5addb24bba6e8be9d4cddc12a45ff25 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-12 10:58:09,114 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/b49a1b1f3cde4a5f98186fb585abc133, entries=22, sequenceid=16, filesize=7.3 K 2023-07-12 10:58:09,118 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/table/a69ee6b8f1cc4724a5b721bd5c87f29a as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/a69ee6b8f1cc4724a5b721bd5c87f29a 2023-07-12 10:58:09,135 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/a69ee6b8f1cc4724a5b721bd5c87f29a, entries=4, sequenceid=16, filesize=4.8 K 2023-07-12 10:58:09,141 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.21 KB/3290, heapSize ~5.88 KB/6024, currentSize=0 B/0 for 1588230740 in 248ms, sequenceid=16, compaction requested=false 2023-07-12 10:58:09,142 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-12 10:58:09,156 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/.tmp/info/7c61e3ca2f7f49229ba8ba16c44c26fc 2023-07-12 10:58:09,164 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/recovered.edits/19.seqid, newMaxSeqId=19, maxSeqId=1 2023-07-12 10:58:09,165 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:09,165 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 10:58:09,165 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 10:58:09,165 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase9.apache.org,43117,1689159488336 record at close sequenceid=16 2023-07-12 10:58:09,168 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/.tmp/info/7c61e3ca2f7f49229ba8ba16c44c26fc as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/7c61e3ca2f7f49229ba8ba16c44c26fc 2023-07-12 10:58:09,169 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-12 10:58:09,170 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-12 10:58:09,174 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=14 2023-07-12 10:58:09,174 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=14, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,39623,1689159484526 in 434 msec 2023-07-12 10:58:09,175 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=14, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,43117,1689159488336; forceNewPlan=false, retain=false 2023-07-12 10:58:09,178 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/7c61e3ca2f7f49229ba8ba16c44c26fc, entries=2, sequenceid=6, filesize=4.8 K 2023-07-12 10:58:09,179 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for e5addb24bba6e8be9d4cddc12a45ff25 in 71ms, sequenceid=6, compaction requested=false 2023-07-12 10:58:09,180 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 10:58:09,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-12 10:58:09,194 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:09,194 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for e5addb24bba6e8be9d4cddc12a45ff25: 2023-07-12 10:58:09,194 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding e5addb24bba6e8be9d4cddc12a45ff25 move to jenkins-hbase9.apache.org,42501,1689159484335 record at close sequenceid=6 2023-07-12 10:58:09,196 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=16, ppid=13, state=RUNNABLE; CloseRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:09,196 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:09,326 INFO [jenkins-hbase9:41017] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 10:58:09,326 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,43117,1689159488336, state=OPENING 2023-07-12 10:58:09,327 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 10:58:09,327 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=14, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,43117,1689159488336}] 2023-07-12 10:58:09,327 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 10:58:09,481 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:09,482 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:09,485 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:42322, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:09,490 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 10:58:09,490 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:09,493 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C43117%2C1689159488336.meta, suffix=.meta, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,43117,1689159488336, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs, maxLogs=32 2023-07-12 10:58:09,511 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:09,514 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:09,514 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:09,518 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,43117,1689159488336/jenkins-hbase9.apache.org%2C43117%2C1689159488336.meta.1689159489494.meta 2023-07-12 10:58:09,519 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK], DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK], DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK]] 2023-07-12 10:58:09,519 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:09,519 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:58:09,519 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 10:58:09,519 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-12 10:58:09,519 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 10:58:09,519 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:09,519 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 10:58:09,519 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 10:58:09,522 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 10:58:09,523 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info 2023-07-12 10:58:09,523 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info 2023-07-12 10:58:09,524 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 10:58:09,534 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/b49a1b1f3cde4a5f98186fb585abc133 2023-07-12 10:58:09,535 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:09,535 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 10:58:09,536 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:58:09,536 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:58:09,537 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 10:58:09,538 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:09,538 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 10:58:09,539 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table 2023-07-12 10:58:09,539 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table 2023-07-12 10:58:09,540 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output 
for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 10:58:09,549 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/a69ee6b8f1cc4724a5b721bd5c87f29a 2023-07-12 10:58:09,549 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:09,550 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740 2023-07-12 10:58:09,552 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740 2023-07-12 10:58:09,555 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-12 10:58:09,558 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 10:58:09,559 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=20; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10261776640, jitterRate=-0.04429757595062256}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 10:58:09,559 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 10:58:09,560 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=18, masterSystemTime=1689159489481 2023-07-12 10:58:09,564 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 10:58:09,564 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 10:58:09,565 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,43117,1689159488336, state=OPEN 2023-07-12 10:58:09,567 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 10:58:09,567 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 10:58:09,568 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=0832c48321f808d3b4d6fb68605b1448, regionState=CLOSED 2023-07-12 10:58:09,568 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e5addb24bba6e8be9d4cddc12a45ff25, regionState=CLOSED 
2023-07-12 10:58:09,568 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159489568"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159489568"}]},"ts":"1689159489568"} 2023-07-12 10:58:09,568 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159489568"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159489568"}]},"ts":"1689159489568"} 2023-07-12 10:58:09,569 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39623] ipc.CallRunner(144): callId: 40 service: ClientService methodName: Mutate size: 213 connection: 172.31.2.10:55648 deadline: 1689159549569, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=43117 startCode=1689159488336. As of locationSeqNum=16. 2023-07-12 10:58:09,569 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39623] ipc.CallRunner(144): callId: 41 service: ClientService methodName: Mutate size: 217 connection: 172.31.2.10:55648 deadline: 1689159549569, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=43117 startCode=1689159488336. As of locationSeqNum=16. 2023-07-12 10:58:09,570 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=14 2023-07-12 10:58:09,570 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=14, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,43117,1689159488336 in 240 msec 2023-07-12 10:58:09,572 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 843 msec 2023-07-12 10:58:09,671 DEBUG [PEWorker-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:09,672 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:42332, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:09,681 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=12 2023-07-12 10:58:09,681 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; CloseRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,39623,1689159484526 in 948 msec 2023-07-12 10:58:09,682 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-12 10:58:09,682 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; CloseRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,39623,1689159484526 in 947 msec 2023-07-12 10:58:09,682 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,42501,1689159484335; forceNewPlan=false, 
retain=false 2023-07-12 10:58:09,683 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,42501,1689159484335; forceNewPlan=false, retain=false 2023-07-12 10:58:09,684 INFO [jenkins-hbase9:41017] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-12 10:58:09,684 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=0832c48321f808d3b4d6fb68605b1448, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:09,684 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159489684"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159489684"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159489684"}]},"ts":"1689159489684"} 2023-07-12 10:58:09,685 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e5addb24bba6e8be9d4cddc12a45ff25, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:09,685 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159489685"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159489685"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159489685"}]},"ts":"1689159489685"} 2023-07-12 10:58:09,688 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=12, state=RUNNABLE; OpenRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,42501,1689159484335}] 2023-07-12 10:58:09,689 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=13, state=RUNNABLE; OpenRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,42501,1689159484335}] 2023-07-12 10:58:09,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-12 10:58:09,842 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:09,842 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:09,846 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:58770, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:09,854 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 
2023-07-12 10:58:09,854 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0832c48321f808d3b4d6fb68605b1448, NAME => 'hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:09,854 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:58:09,854 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. service=MultiRowMutationService 2023-07-12 10:58:09,854 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-12 10:58:09,855 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:09,855 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:09,855 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:09,855 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:09,857 INFO [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:09,858 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m 2023-07-12 10:58:09,858 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m 2023-07-12 10:58:09,859 INFO [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
0832c48321f808d3b4d6fb68605b1448 columnFamilyName m 2023-07-12 10:58:09,867 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/8fa7cb9488f24a899b6cdde7163b9c4c 2023-07-12 10:58:09,868 INFO [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(310): Store=0832c48321f808d3b4d6fb68605b1448/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:09,869 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:09,871 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:09,876 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:09,877 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 0832c48321f808d3b4d6fb68605b1448; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@399850fc, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:09,877 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 0832c48321f808d3b4d6fb68605b1448: 2023-07-12 10:58:09,878 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448., pid=19, masterSystemTime=1689159489841 2023-07-12 10:58:09,882 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:09,883 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:09,883 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 
2023-07-12 10:58:09,883 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e5addb24bba6e8be9d4cddc12a45ff25, NAME => 'hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:09,884 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:09,884 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:09,884 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:09,884 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:09,885 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=0832c48321f808d3b4d6fb68605b1448, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:09,885 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159489884"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159489884"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159489884"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159489884"}]},"ts":"1689159489884"} 2023-07-12 10:58:09,886 INFO [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:09,889 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info 2023-07-12 10:58:09,890 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info 2023-07-12 10:58:09,890 INFO [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e5addb24bba6e8be9d4cddc12a45ff25 columnFamilyName info 2023-07-12 10:58:09,892 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=12 2023-07-12 10:58:09,893 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=12, state=SUCCESS; OpenRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,42501,1689159484335 in 201 msec 2023-07-12 10:58:09,897 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, REOPEN/MOVE in 1.1710 sec 2023-07-12 10:58:09,904 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/7c61e3ca2f7f49229ba8ba16c44c26fc 2023-07-12 10:58:09,904 INFO [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(310): Store=e5addb24bba6e8be9d4cddc12a45ff25/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:09,906 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:09,907 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:09,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:09,913 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened e5addb24bba6e8be9d4cddc12a45ff25; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11441969120, jitterRate=0.0656164139509201}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:09,914 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for e5addb24bba6e8be9d4cddc12a45ff25: 2023-07-12 10:58:09,915 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25., pid=20, masterSystemTime=1689159489841 2023-07-12 10:58:09,917 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:09,917 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 
2023-07-12 10:58:09,918 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e5addb24bba6e8be9d4cddc12a45ff25, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:09,919 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159489918"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159489918"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159489918"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159489918"}]},"ts":"1689159489918"} 2023-07-12 10:58:09,925 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=13 2023-07-12 10:58:09,925 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=13, state=SUCCESS; OpenRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,42501,1689159484335 in 233 msec 2023-07-12 10:58:09,927 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, REOPEN/MOVE in 1.2000 sec 2023-07-12 10:58:10,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,39623,1689159484526] are moved back to default 2023-07-12 10:58:10,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(438): Move servers done: default => deadServerGroup 2023-07-12 10:58:10,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:10,732 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39623] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.2.10:55686 deadline: 1689159550732, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=42501 startCode=1689159484335. As of locationSeqNum=9. 2023-07-12 10:58:10,836 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39623] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Get size: 88 connection: 172.31.2.10:55686 deadline: 1689159550836, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=43117 startCode=1689159488336. As of locationSeqNum=16. 
2023-07-12 10:58:10,938 DEBUG [hconnection-0x324c1766-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:10,942 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:42340, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:10,958 DEBUG [hconnection-0x324c1766-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:10,962 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:58776, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:10,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:10,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:10,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=deadServerGroup 2023-07-12 10:58:10,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:10,982 DEBUG [Listener at localhost/44831] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:10,989 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:55690, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:10,989 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39623] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,39623,1689159484526' ***** 2023-07-12 10:58:10,990 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39623] regionserver.HRegionServer(2311): STOPPED: Called by admin client hconnection-0xddfa172 2023-07-12 10:58:10,990 INFO [RS:1;jenkins-hbase9:39623] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:10,991 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:10,995 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:10,998 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:11,003 INFO [RS:1;jenkins-hbase9:39623] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@40dc144b{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:11,006 INFO [RS:1;jenkins-hbase9:39623] server.AbstractConnector(383): Stopped ServerConnector@6786282b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:11,007 INFO [RS:1;jenkins-hbase9:39623] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:11,007 INFO [RS:1;jenkins-hbase9:39623] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@6f2ec142{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:11,008 INFO [RS:1;jenkins-hbase9:39623] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4da0451f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:11,010 INFO [RS:1;jenkins-hbase9:39623] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:11,010 INFO [RS:1;jenkins-hbase9:39623] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:11,010 INFO [RS:1;jenkins-hbase9:39623] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:11,010 INFO [RS:1;jenkins-hbase9:39623] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:11,010 DEBUG [RS:1;jenkins-hbase9:39623] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x336774d7 to 127.0.0.1:49301 2023-07-12 10:58:11,010 DEBUG [RS:1;jenkins-hbase9:39623] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:11,010 INFO [RS:1;jenkins-hbase9:39623] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,39623,1689159484526; all regions closed. 2023-07-12 10:58:11,025 DEBUG [RS:1;jenkins-hbase9:39623] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs 2023-07-12 10:58:11,025 INFO [RS:1;jenkins-hbase9:39623] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C39623%2C1689159484526.meta:.meta(num 1689159487159) 2023-07-12 10:58:11,034 DEBUG [RS:1;jenkins-hbase9:39623] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs 2023-07-12 10:58:11,034 INFO [RS:1;jenkins-hbase9:39623] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C39623%2C1689159484526:(num 1689159486863) 2023-07-12 10:58:11,034 DEBUG [RS:1;jenkins-hbase9:39623] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:11,034 INFO [RS:1;jenkins-hbase9:39623] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:11,035 INFO [RS:1;jenkins-hbase9:39623] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:11,035 INFO [RS:1;jenkins-hbase9:39623] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:11,035 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:11,035 INFO [RS:1;jenkins-hbase9:39623] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:11,036 INFO [RS:1;jenkins-hbase9:39623] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 10:58:11,037 INFO [RS:1;jenkins-hbase9:39623] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:39623 2023-07-12 10:58:11,051 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:11,052 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:11,052 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:11,051 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:39623-0x1015920fb080002, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:11,052 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:39623-0x1015920fb080002, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:11,052 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:11,052 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:11,052 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,39623,1689159484526 2023-07-12 10:58:11,052 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:11,053 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,39623,1689159484526] 2023-07-12 10:58:11,054 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,39623,1689159484526; numProcessing=1 2023-07-12 10:58:11,054 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:11,056 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,39623,1689159484526 already deleted, retry=false 2023-07-12 10:58:11,056 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45597-0x1015920fb080003, 
quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:11,056 INFO [RegionServerTracker-0] master.ServerManager(568): Processing expiration of jenkins-hbase9.apache.org,39623,1689159484526 on jenkins-hbase9.apache.org,41017,1689159482181 2023-07-12 10:58:11,056 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:11,056 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:11,058 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:11,058 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:11,058 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:11,058 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase9.apache.org,39623,1689159484526 znode expired, triggering replicatorRemoved event 2023-07-12 10:58:11,058 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:11,058 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase9.apache.org,39623,1689159484526 znode expired, triggering replicatorRemoved event 2023-07-12 10:58:11,061 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:11,061 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase9.apache.org,39623,1689159484526 znode expired, triggering replicatorRemoved event 2023-07-12 10:58:11,062 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:11,062 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:11,062 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:11,063 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(162): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:11,063 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:11,063 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:11,063 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:11,063 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:11,064 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:11,067 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase9.apache.org,39623,1689159484526, splitWal=true, meta=false 2023-07-12 10:58:11,068 INFO [RegionServerTracker-0] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=21 for jenkins-hbase9.apache.org,39623,1689159484526 (carryingMeta=false) jenkins-hbase9.apache.org,39623,1689159484526/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@4336679c[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-12 10:58:11,068 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:58:11,071 WARN [RS-EventLoopGroup-5-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase9.apache.org/172.31.2.10:39623 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:39623 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155)
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128)
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:750)
2023-07-12 10:58:11,073 DEBUG [RS-EventLoopGroup-5-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase9.apache.org/172.31.2.10:39623 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:39623 2023-07-12 10:58:11,074 INFO [PEWorker-5] procedure.ServerCrashProcedure(161): Start pid=21, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,39623,1689159484526, splitWal=true, meta=false 2023-07-12 10:58:11,076 INFO [PEWorker-5] procedure.ServerCrashProcedure(199): jenkins-hbase9.apache.org,39623,1689159484526 had 0 regions 2023-07-12 10:58:11,078 INFO [PEWorker-5] procedure.ServerCrashProcedure(300): Splitting WALs pid=21, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,39623,1689159484526, splitWal=true, meta=false, isMeta: false 2023-07-12 10:58:11,080 DEBUG [PEWorker-5] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,39623,1689159484526-splitting 2023-07-12 10:58:11,081 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,39623,1689159484526-splitting dir is empty, no logs to split. 2023-07-12 10:58:11,081 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase9.apache.org,39623,1689159484526 WAL count=0, meta=false 2023-07-12 10:58:11,085 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,39623,1689159484526-splitting dir is empty, no logs to split. 2023-07-12 10:58:11,085 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase9.apache.org,39623,1689159484526 WAL count=0, meta=false 2023-07-12 10:58:11,086 DEBUG [PEWorker-5] procedure.ServerCrashProcedure(290): Check if jenkins-hbase9.apache.org,39623,1689159484526 WAL splitting is done?
wals=0, meta=false 2023-07-12 10:58:11,090 INFO [PEWorker-5] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase9.apache.org,39623,1689159484526 failed, ignore...File hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,39623,1689159484526-splitting does not exist. 2023-07-12 10:58:11,092 INFO [PEWorker-5] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase9.apache.org,39623,1689159484526 after splitting done 2023-07-12 10:58:11,092 DEBUG [PEWorker-5] master.DeadServer(114): Removed jenkins-hbase9.apache.org,39623,1689159484526 from processing; numProcessing=0 2023-07-12 10:58:11,094 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ServerCrashProcedure jenkins-hbase9.apache.org,39623,1689159484526, splitWal=true, meta=false in 34 msec 2023-07-12 10:58:11,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=deadServerGroup 2023-07-12 10:58:11,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:11,103 DEBUG [hconnection-0x60d365ca-shared-pool-6] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase9.apache.org/172.31.2.10:39623 this server is in the failed servers list 2023-07-12 10:58:11,184 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:11,185 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:58792, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:11,188 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:11,189 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:11,190 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-12 10:58:11,190 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:11,197 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 10:58:11,212 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:39623-0x1015920fb080002, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:11,213 INFO [RS:1;jenkins-hbase9:39623] regionserver.HRegionServer(1227): Exiting; 
stopping=jenkins-hbase9.apache.org,39623,1689159484526; zookeeper connection closed. 2023-07-12 10:58:11,213 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:39623-0x1015920fb080002, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:11,213 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@486618ab] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@486618ab 2023-07-12 10:58:11,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:11,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:11,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:11,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 10:58:11,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:11,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:11,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:11,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:11,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:11,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-12 10:58:11,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 10:58:11,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:11,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:11,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:11,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:11,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:39623] to rsgroup default 2023-07-12 10:58:11,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(258): Dropping jenkins-hbase9.apache.org:39623 during move-to-default rsgroup because not online 2023-07-12 10:58:11,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:11,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-12 10:58:11,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:11,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group deadServerGroup, current retry=0 2023-07-12 10:58:11,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(261): All regions from [] are moved back to deadServerGroup 2023-07-12 10:58:11,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(438): Move servers done: deadServerGroup => default 2023-07-12 10:58:11,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:11,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup deadServerGroup 2023-07-12 10:58:11,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:11,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:11,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:11,258 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-12 10:58:11,271 INFO [Listener at localhost/44831] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:11,272 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:11,272 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:11,272 INFO [Listener at localhost/44831] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 
writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:11,272 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:11,272 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:11,272 INFO [Listener at localhost/44831] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:11,273 INFO [Listener at localhost/44831] ipc.NettyRpcServer(120): Bind to /172.31.2.10:43635 2023-07-12 10:58:11,274 INFO [Listener at localhost/44831] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:11,275 DEBUG [Listener at localhost/44831] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:11,276 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:11,277 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:11,278 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43635 connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:11,282 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:436350x0, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:11,283 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(162): regionserver:436350x0, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 10:58:11,284 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43635-0x1015920fb08000d connected 2023-07-12 10:58:11,285 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(162): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-12 10:58:11,286 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:11,287 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43635 2023-07-12 10:58:11,287 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43635 2023-07-12 10:58:11,287 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43635 2023-07-12 10:58:11,288 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43635 2023-07-12 10:58:11,288 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started 
handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43635 2023-07-12 10:58:11,290 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:11,290 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:11,290 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:11,291 INFO [Listener at localhost/44831] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:11,291 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:11,291 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:11,291 INFO [Listener at localhost/44831] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 10:58:11,291 INFO [Listener at localhost/44831] http.HttpServer(1146): Jetty bound to port 40321 2023-07-12 10:58:11,292 INFO [Listener at localhost/44831] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:11,300 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:11,300 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@f0c18f5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:11,300 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:11,301 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3492ad1a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:11,423 INFO [Listener at localhost/44831] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:11,423 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:11,424 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:11,424 INFO [Listener at localhost/44831] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 10:58:11,426 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:11,427 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@30087dd3{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/jetty-0_0_0_0-40321-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8156380306828845885/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:11,431 INFO [Listener at localhost/44831] server.AbstractConnector(333): Started ServerConnector@6dad6690{HTTP/1.1, (http/1.1)}{0.0.0.0:40321} 2023-07-12 10:58:11,431 INFO [Listener at localhost/44831] server.Server(415): Started @15123ms 2023-07-12 10:58:11,436 INFO [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer(951): ClusterId : 2ee0ec36-84f9-4576-888d-f37f0b52beaa 2023-07-12 10:58:11,439 DEBUG [RS:4;jenkins-hbase9:43635] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:11,441 DEBUG [RS:4;jenkins-hbase9:43635] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:11,441 DEBUG [RS:4;jenkins-hbase9:43635] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:11,443 DEBUG [RS:4;jenkins-hbase9:43635] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:11,445 DEBUG [RS:4;jenkins-hbase9:43635] zookeeper.ReadOnlyZKClient(139): Connect 0x4fc9ca10 to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:11,451 DEBUG [RS:4;jenkins-hbase9:43635] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@410e2d99, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:11,451 DEBUG [RS:4;jenkins-hbase9:43635] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4531a160, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:11,460 DEBUG [RS:4;jenkins-hbase9:43635] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:4;jenkins-hbase9:43635 2023-07-12 10:58:11,460 INFO [RS:4;jenkins-hbase9:43635] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:11,461 INFO [RS:4;jenkins-hbase9:43635] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:11,461 DEBUG [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-12 10:58:11,462 INFO [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,41017,1689159482181 with isa=jenkins-hbase9.apache.org/172.31.2.10:43635, startcode=1689159491271 2023-07-12 10:58:11,462 DEBUG [RS:4;jenkins-hbase9:43635] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:11,465 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:52189, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:11,466 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41017] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:11,466 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:58:11,466 DEBUG [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5 2023-07-12 10:58:11,466 DEBUG [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42757 2023-07-12 10:58:11,466 DEBUG [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35301 2023-07-12 10:58:11,468 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:11,468 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:11,468 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:11,470 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:11,470 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:11,470 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:11,470 DEBUG [RS:4;jenkins-hbase9:43635] zookeeper.ZKUtil(162): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:11,470 WARN [RS:4;jenkins-hbase9:43635] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 10:58:11,470 INFO [RS:4;jenkins-hbase9:43635] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:11,471 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:11,471 DEBUG [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:11,471 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:11,471 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:11,472 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:11,472 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:11,472 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:11,473 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:11,474 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:11,475 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,43635,1689159491271] 2023-07-12 10:58:11,476 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:11,477 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 10:58:11,486 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,41017,1689159482181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-12 10:58:11,486 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:11,489 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:11,490 DEBUG [RS:4;jenkins-hbase9:43635] zookeeper.ZKUtil(162): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:11,491 DEBUG [RS:4;jenkins-hbase9:43635] zookeeper.ZKUtil(162): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:11,492 DEBUG [RS:4;jenkins-hbase9:43635] zookeeper.ZKUtil(162): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:11,493 DEBUG [RS:4;jenkins-hbase9:43635] zookeeper.ZKUtil(162): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:11,495 DEBUG [RS:4;jenkins-hbase9:43635] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:11,495 INFO [RS:4;jenkins-hbase9:43635] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:11,496 INFO [RS:4;jenkins-hbase9:43635] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:11,497 INFO [RS:4;jenkins-hbase9:43635] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:11,497 INFO [RS:4;jenkins-hbase9:43635] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:11,497 INFO [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:11,499 INFO [RS:4;jenkins-hbase9:43635] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:11,499 DEBUG [RS:4;jenkins-hbase9:43635] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:11,499 DEBUG [RS:4;jenkins-hbase9:43635] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:11,499 DEBUG [RS:4;jenkins-hbase9:43635] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:11,499 DEBUG [RS:4;jenkins-hbase9:43635] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:11,499 DEBUG [RS:4;jenkins-hbase9:43635] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:11,500 DEBUG [RS:4;jenkins-hbase9:43635] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:11,500 DEBUG [RS:4;jenkins-hbase9:43635] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:11,500 DEBUG [RS:4;jenkins-hbase9:43635] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:11,500 DEBUG [RS:4;jenkins-hbase9:43635] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:11,500 DEBUG [RS:4;jenkins-hbase9:43635] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:11,504 INFO [RS:4;jenkins-hbase9:43635] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:11,505 INFO [RS:4;jenkins-hbase9:43635] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:11,505 INFO [RS:4;jenkins-hbase9:43635] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:11,516 INFO [RS:4;jenkins-hbase9:43635] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:11,516 INFO [RS:4;jenkins-hbase9:43635] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,43635,1689159491271-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:11,527 INFO [RS:4;jenkins-hbase9:43635] regionserver.Replication(203): jenkins-hbase9.apache.org,43635,1689159491271 started 2023-07-12 10:58:11,527 INFO [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,43635,1689159491271, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:43635, sessionid=0x1015920fb08000d 2023-07-12 10:58:11,527 DEBUG [RS:4;jenkins-hbase9:43635] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:11,527 DEBUG [RS:4;jenkins-hbase9:43635] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:11,527 DEBUG [RS:4;jenkins-hbase9:43635] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,43635,1689159491271' 2023-07-12 10:58:11,528 DEBUG [RS:4;jenkins-hbase9:43635] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:11,528 DEBUG [RS:4;jenkins-hbase9:43635] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:11,528 DEBUG [RS:4;jenkins-hbase9:43635] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:11,529 DEBUG [RS:4;jenkins-hbase9:43635] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:11,529 DEBUG [RS:4;jenkins-hbase9:43635] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:11,529 DEBUG [RS:4;jenkins-hbase9:43635] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,43635,1689159491271' 2023-07-12 10:58:11,529 DEBUG [RS:4;jenkins-hbase9:43635] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:11,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:11,529 DEBUG [RS:4;jenkins-hbase9:43635] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:11,530 DEBUG [RS:4;jenkins-hbase9:43635] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:11,530 INFO [RS:4;jenkins-hbase9:43635] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 10:58:11,530 INFO [RS:4;jenkins-hbase9:43635] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 10:58:11,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:11,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:11,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:11,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:11,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:11,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:11,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:41017] to rsgroup master 2023-07-12 10:58:11,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:11,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.CallRunner(144): callId: 69 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.2.10:45870 deadline: 1689160691547, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 2023-07-12 10:58:11,548 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 10:58:11,551 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:11,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:11,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:11,552 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:42501, jenkins-hbase9.apache.org:43117, jenkins-hbase9.apache.org:43635, jenkins-hbase9.apache.org:45597], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:11,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:11,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:11,590 INFO [Listener at localhost/44831] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testClearNotProcessedDeadServer Thread=481 (was 421) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_128734660_17 at /127.0.0.1:37280 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1369062891-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:42757 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:4;jenkins-hbase9:43635 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially 
hanging thread: qtp1369062891-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1616880846_17 at /127.0.0.1:47732 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741845_1021] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-427095611_17 at /127.0.0.1:37478 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x324c1766-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1281706449_17 at /127.0.0.1:33196 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_128734660_17 at /127.0.0.1:37458 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp696337587-782 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43117 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x324c1766-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x4fc9ca10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/356426229.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1369062891-635-acceptor-0@6405d14b-ServerConnector@bebdc87{HTTP/1.1, (http/1.1)}{0.0.0.0:39059} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x324c1766-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp696337587-784 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43117 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-427095611_17 at /127.0.0.1:37466 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741845_1021, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp696337587-779 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1771827885.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp696337587-781 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: qtp1369062891-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x4fc9ca10-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1369062891-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741845_1021, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43117 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1369062891-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1369062891-634 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1771827885.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-metaLookup-shared--pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-13b46201-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1616880846_17 at /127.0.0.1:33270 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741845_1021] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x662cd978-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS:3;jenkins-hbase9:43117 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43117 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1616880846_17 at /127.0.0.1:37456 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741845_1021] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1616880846_17 at /127.0.0.1:47670 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp696337587-785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43117 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-5f307a4d-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43117 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43117 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x662cd978 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/356426229.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RSProcedureDispatcher-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_-427095611_17 at /127.0.0.1:47746 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43117 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1281706449_17 at /127.0.0.1:33236 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp696337587-780-acceptor-0@50a2896-ServerConnector@6dad6690{HTTP/1.1, (http/1.1)}{0.0.0.0:40321} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase9:43635Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-metaLookup-shared--pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5-prefix:jenkins-hbase9.apache.org,43117,1689159488336 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp696337587-783 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1616880846_17 at /127.0.0.1:33204 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1234087232) connection to localhost/127.0.0.1:42757 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS:3;jenkins-hbase9:43117-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43117 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5-prefix:jenkins-hbase9.apache.org,43117,1689159488336.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x4fc9ca10-SendThread(127.0.0.1:49301) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-427095611_17 at /127.0.0.1:47524 [Waiting 
for operation #11] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741845_1021, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1616880846_17 at /127.0.0.1:37416 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp696337587-786 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x662cd978-SendThread(127.0.0.1:49301) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Client (1234087232) connection to localhost/127.0.0.1:42757 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:4;jenkins-hbase9:43635-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase9:43117Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43117 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x324c1766-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1369062891-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x324c1766-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=758 (was 673) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=341 (was 328) - SystemLoadAverage LEAK? -, ProcessCount=172 (was 172), AvailableMemoryMB=6169 (was 6274) 2023-07-12 10:58:11,616 INFO [Listener at localhost/44831] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testDefaultNamespaceCreateAndAssign Thread=481, OpenFileDescriptor=758, MaxFileDescriptor=60000, SystemLoadAverage=341, ProcessCount=172, AvailableMemoryMB=6161 2023-07-12 10:58:11,616 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(132): testDefaultNamespaceCreateAndAssign 2023-07-12 10:58:11,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:11,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:11,634 INFO [RS:4;jenkins-hbase9:43635] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C43635%2C1689159491271, suffix=, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,43635,1689159491271, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs, maxLogs=32 2023-07-12 10:58:11,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:11,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:11,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:11,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:11,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:11,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:11,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:11,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:11,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:11,654 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:11,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:11,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:11,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:11,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:11,672 DEBUG [RS-EventLoopGroup-8-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:11,674 DEBUG [RS-EventLoopGroup-8-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:11,674 DEBUG [RS-EventLoopGroup-8-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:11,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:11,683 INFO [RS:4;jenkins-hbase9:43635] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,43635,1689159491271/jenkins-hbase9.apache.org%2C43635%2C1689159491271.1689159491635 2023-07-12 10:58:11,685 DEBUG [RS:4;jenkins-hbase9:43635] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK], DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK], DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK]] 2023-07-12 10:58:11,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:11,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:11,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:41017] to rsgroup master 2023-07-12 10:58:11,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:11,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.CallRunner(144): callId: 97 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.2.10:45870 deadline: 1689160691689, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 2023-07-12 10:58:11,690 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 10:58:11,692 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:11,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:11,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:11,694 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:42501, jenkins-hbase9.apache.org:43117, jenkins-hbase9.apache.org:43635, jenkins-hbase9.apache.org:45597], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:11,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:11,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:11,696 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBasics(180): testDefaultNamespaceCreateAndAssign 2023-07-12 10:58:11,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$16(3053): Client=jenkins//172.31.2.10 modify {NAME => 'default', hbase.rsgroup.name => 'default'} 2023-07-12 10:58:11,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=default 2023-07-12 10:58:11,725 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 10:58:11,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-12 10:58:11,731 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; ModifyNamespaceProcedure, namespace=default in 22 msec 2023-07-12 10:58:11,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-12 10:58:11,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:11,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-12 10:58:11,847 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=23, 
state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:11,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "Group_testCreateAndAssign" procId is: 23 2023-07-12 10:58:11,853 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:11,854 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:11,855 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:11,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-12 10:58:11,861 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=23, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:11,863 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateAndAssign/28d0a97f825c9bf8dfdef237b1321908 2023-07-12 10:58:11,864 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateAndAssign/28d0a97f825c9bf8dfdef237b1321908 empty. 2023-07-12 10:58:11,864 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateAndAssign/28d0a97f825c9bf8dfdef237b1321908 2023-07-12 10:58:11,864 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndAssign regions 2023-07-12 10:58:11,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-12 10:58:12,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-12 10:58:12,296 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateAndAssign/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:12,298 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(7675): creating {ENCODED => 28d0a97f825c9bf8dfdef237b1321908, NAME => 'Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:12,314 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908.; StoreHotnessProtector, 
parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:12,314 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1604): Closing 28d0a97f825c9bf8dfdef237b1321908, disabling compactions & flushes 2023-07-12 10:58:12,314 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908. 2023-07-12 10:58:12,315 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908. 2023-07-12 10:58:12,315 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908. after waiting 0 ms 2023-07-12 10:58:12,315 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908. 2023-07-12 10:58:12,315 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1838): Closed Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908. 2023-07-12 10:58:12,315 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1558): Region close journal for 28d0a97f825c9bf8dfdef237b1321908: 2023-07-12 10:58:12,319 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=23, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:12,320 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689159492320"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159492320"}]},"ts":"1689159492320"} 2023-07-12 10:58:12,322 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
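The CreateTableProcedure entries above (pid=23 and its region-init work) are the server side of a plain createTable call for Group_testCreateAndAssign with a single column family 'f' and default settings. Below is a minimal client-side sketch of the equivalent call, assuming a standard HBase 2.x client and a reachable cluster; the test itself drives this through its own helpers, so this is illustrative only.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateGroupTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableDescriptor desc = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("Group_testCreateAndAssign"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build();
          // Blocks until the CreateTableProcedure and its ASSIGN subprocedure complete,
          // i.e. until the log reports "Operation: CREATE ... procId: 23 completed".
          admin.createTable(desc);
        }
      }
    }
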
2023-07-12 10:58:12,323 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=23, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:12,323 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159492323"}]},"ts":"1689159492323"} 2023-07-12 10:58:12,325 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=ENABLING in hbase:meta 2023-07-12 10:58:12,328 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:12,328 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:12,328 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:12,328 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:12,328 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 10:58:12,328 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:12,329 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=23, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=28d0a97f825c9bf8dfdef237b1321908, ASSIGN}] 2023-07-12 10:58:12,330 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, ppid=23, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=28d0a97f825c9bf8dfdef237b1321908, ASSIGN 2023-07-12 10:58:12,331 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=24, ppid=23, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=28d0a97f825c9bf8dfdef237b1321908, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,43117,1689159488336; forceNewPlan=false, retain=false 2023-07-12 10:58:12,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-12 10:58:12,482 INFO [jenkins-hbase9:41017] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 10:58:12,483 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=28d0a97f825c9bf8dfdef237b1321908, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:12,483 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689159492483"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159492483"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159492483"}]},"ts":"1689159492483"} 2023-07-12 10:58:12,485 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=24, state=RUNNABLE; OpenRegionProcedure 28d0a97f825c9bf8dfdef237b1321908, server=jenkins-hbase9.apache.org,43117,1689159488336}] 2023-07-12 10:58:12,644 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908. 2023-07-12 10:58:12,644 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 28d0a97f825c9bf8dfdef237b1321908, NAME => 'Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:12,644 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndAssign 28d0a97f825c9bf8dfdef237b1321908 2023-07-12 10:58:12,644 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:12,645 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 28d0a97f825c9bf8dfdef237b1321908 2023-07-12 10:58:12,645 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 28d0a97f825c9bf8dfdef237b1321908 2023-07-12 10:58:12,648 INFO [StoreOpener-28d0a97f825c9bf8dfdef237b1321908-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 28d0a97f825c9bf8dfdef237b1321908 2023-07-12 10:58:12,653 DEBUG [StoreOpener-28d0a97f825c9bf8dfdef237b1321908-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateAndAssign/28d0a97f825c9bf8dfdef237b1321908/f 2023-07-12 10:58:12,654 DEBUG [StoreOpener-28d0a97f825c9bf8dfdef237b1321908-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateAndAssign/28d0a97f825c9bf8dfdef237b1321908/f 2023-07-12 10:58:12,654 INFO [StoreOpener-28d0a97f825c9bf8dfdef237b1321908-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, 
major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 28d0a97f825c9bf8dfdef237b1321908 columnFamilyName f 2023-07-12 10:58:12,656 INFO [StoreOpener-28d0a97f825c9bf8dfdef237b1321908-1] regionserver.HStore(310): Store=28d0a97f825c9bf8dfdef237b1321908/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:12,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateAndAssign/28d0a97f825c9bf8dfdef237b1321908 2023-07-12 10:58:12,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateAndAssign/28d0a97f825c9bf8dfdef237b1321908 2023-07-12 10:58:12,667 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 28d0a97f825c9bf8dfdef237b1321908 2023-07-12 10:58:12,670 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateAndAssign/28d0a97f825c9bf8dfdef237b1321908/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:12,671 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 28d0a97f825c9bf8dfdef237b1321908; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10216578400, jitterRate=-0.04850699007511139}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:12,671 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 28d0a97f825c9bf8dfdef237b1321908: 2023-07-12 10:58:12,674 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908., pid=25, masterSystemTime=1689159492638 2023-07-12 10:58:12,676 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908. 2023-07-12 10:58:12,676 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908. 
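Once the handler logs "Opened Group_testCreateAndAssign,,...", the single region of the new table is online on jenkins-hbase9.apache.org,43117. The snippet below is a small, optional way to confirm such an assignment from the client side using the standard 2.x RegionLocator API; nothing in it is specific to this test run.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionAssignmentCheckSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator =
                 conn.getRegionLocator(TableName.valueOf("Group_testCreateAndAssign"))) {
          // reload=true bypasses the client-side cache so the current assignment is returned.
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
          System.out.println(loc.getRegion().getEncodedName() + " is on " + loc.getServerName());
        }
      }
    }
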
2023-07-12 10:58:12,677 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=28d0a97f825c9bf8dfdef237b1321908, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:12,678 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689159492677"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159492677"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159492677"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159492677"}]},"ts":"1689159492677"} 2023-07-12 10:58:12,683 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=24 2023-07-12 10:58:12,683 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=24, state=SUCCESS; OpenRegionProcedure 28d0a97f825c9bf8dfdef237b1321908, server=jenkins-hbase9.apache.org,43117,1689159488336 in 195 msec 2023-07-12 10:58:12,686 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=23 2023-07-12 10:58:12,687 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=23, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=28d0a97f825c9bf8dfdef237b1321908, ASSIGN in 354 msec 2023-07-12 10:58:12,688 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=23, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:12,688 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159492688"}]},"ts":"1689159492688"} 2023-07-12 10:58:12,690 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=ENABLED in hbase:meta 2023-07-12 10:58:12,694 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=23, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:12,699 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign in 851 msec 2023-07-12 10:58:12,770 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 10:58:12,845 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 10:58:12,846 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-12 10:58:12,846 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:12,847 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering 
RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-12 10:58:12,847 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 10:58:12,847 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-12 10:58:12,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-12 10:58:12,962 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateAndAssign, procId: 23 completed 2023-07-12 10:58:12,963 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:12,969 DEBUG [Listener at localhost/44831] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:12,971 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:35554, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:12,975 DEBUG [Listener at localhost/44831] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:12,976 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:59324, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:12,977 DEBUG [Listener at localhost/44831] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:12,978 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:36430, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:12,979 DEBUG [Listener at localhost/44831] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:12,980 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:40114, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:12,984 INFO [Listener at localhost/44831] client.HBaseAdmin$15(890): Started disable of Group_testCreateAndAssign 2023-07-12 10:58:12,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable Group_testCreateAndAssign 2023-07-12 10:58:12,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-12 10:58:13,002 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159493002"}]},"ts":"1689159493002"} 2023-07-12 10:58:13,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=26 2023-07-12 10:58:13,004 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=DISABLING in 
hbase:meta 2023-07-12 10:58:13,006 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testCreateAndAssign to state=DISABLING 2023-07-12 10:58:13,007 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=28d0a97f825c9bf8dfdef237b1321908, UNASSIGN}] 2023-07-12 10:58:13,009 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, ppid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=28d0a97f825c9bf8dfdef237b1321908, UNASSIGN 2023-07-12 10:58:13,010 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=28d0a97f825c9bf8dfdef237b1321908, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:13,010 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689159493010"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159493010"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159493010"}]},"ts":"1689159493010"} 2023-07-12 10:58:13,012 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=27, state=RUNNABLE; CloseRegionProcedure 28d0a97f825c9bf8dfdef237b1321908, server=jenkins-hbase9.apache.org,43117,1689159488336}] 2023-07-12 10:58:13,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=26 2023-07-12 10:58:13,166 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 28d0a97f825c9bf8dfdef237b1321908 2023-07-12 10:58:13,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 28d0a97f825c9bf8dfdef237b1321908, disabling compactions & flushes 2023-07-12 10:58:13,167 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908. 2023-07-12 10:58:13,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908. 2023-07-12 10:58:13,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908. after waiting 0 ms 2023-07-12 10:58:13,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908. 2023-07-12 10:58:13,174 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateAndAssign/28d0a97f825c9bf8dfdef237b1321908/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:13,176 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908. 
2023-07-12 10:58:13,176 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 28d0a97f825c9bf8dfdef237b1321908: 2023-07-12 10:58:13,178 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 28d0a97f825c9bf8dfdef237b1321908 2023-07-12 10:58:13,179 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=28d0a97f825c9bf8dfdef237b1321908, regionState=CLOSED 2023-07-12 10:58:13,179 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689159493179"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159493179"}]},"ts":"1689159493179"} 2023-07-12 10:58:13,186 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=27 2023-07-12 10:58:13,186 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=27, state=SUCCESS; CloseRegionProcedure 28d0a97f825c9bf8dfdef237b1321908, server=jenkins-hbase9.apache.org,43117,1689159488336 in 169 msec 2023-07-12 10:58:13,188 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=26 2023-07-12 10:58:13,188 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=26, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=28d0a97f825c9bf8dfdef237b1321908, UNASSIGN in 179 msec 2023-07-12 10:58:13,189 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159493189"}]},"ts":"1689159493189"} 2023-07-12 10:58:13,191 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=DISABLED in hbase:meta 2023-07-12 10:58:13,192 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testCreateAndAssign to state=DISABLED 2023-07-12 10:58:13,195 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign in 201 msec 2023-07-12 10:58:13,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=26 2023-07-12 10:58:13,306 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateAndAssign, procId: 26 completed 2023-07-12 10:58:13,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete Group_testCreateAndAssign 2023-07-12 10:58:13,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-12 10:58:13,322 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=29, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-12 10:58:13,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateAndAssign' from rsgroup 'default' 2023-07-12 10:58:13,324 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting 
regions from filesystem for pid=29, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-12 10:58:13,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:13,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:13,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:13,331 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateAndAssign/28d0a97f825c9bf8dfdef237b1321908 2023-07-12 10:58:13,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=29 2023-07-12 10:58:13,335 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateAndAssign/28d0a97f825c9bf8dfdef237b1321908/f, FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateAndAssign/28d0a97f825c9bf8dfdef237b1321908/recovered.edits] 2023-07-12 10:58:13,342 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateAndAssign/28d0a97f825c9bf8dfdef237b1321908/recovered.edits/4.seqid to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/default/Group_testCreateAndAssign/28d0a97f825c9bf8dfdef237b1321908/recovered.edits/4.seqid 2023-07-12 10:58:13,343 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateAndAssign/28d0a97f825c9bf8dfdef237b1321908 2023-07-12 10:58:13,343 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndAssign regions 2023-07-12 10:58:13,346 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=29, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-12 10:58:13,369 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCreateAndAssign from hbase:meta 2023-07-12 10:58:13,412 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateAndAssign' descriptor. 2023-07-12 10:58:13,414 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=29, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-12 10:58:13,414 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateAndAssign' from region states. 
2023-07-12 10:58:13,414 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159493414"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:13,416 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 10:58:13,416 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 28d0a97f825c9bf8dfdef237b1321908, NAME => 'Group_testCreateAndAssign,,1689159491840.28d0a97f825c9bf8dfdef237b1321908.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 10:58:13,416 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateAndAssign' as deleted. 2023-07-12 10:58:13,417 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689159493416"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:13,418 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateAndAssign state from META 2023-07-12 10:58:13,422 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=29, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-12 10:58:13,423 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign in 110 msec 2023-07-12 10:58:13,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=29 2023-07-12 10:58:13,433 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateAndAssign, procId: 29 completed 2023-07-12 10:58:13,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:13,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:13,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:13,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
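The DisableTableProcedure (pid=26) and DeleteTableProcedure (pid=29) recorded above correspond to the usual two-step removal of a table: it must be disabled before it can be deleted, after which its region directories are archived and its rows removed from hbase:meta. A minimal client sketch of that sequence, again only illustrative of the calls the test issues through its helpers:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropGroupTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableName tn = TableName.valueOf("Group_testCreateAndAssign");
          if (admin.tableExists(tn)) {
            if (admin.isTableEnabled(tn)) {
              // Waits for the DisableTableProcedure to reach state=DISABLED.
              admin.disableTable(tn);
            }
            // Waits for the DeleteTableProcedure to archive the region dirs and
            // delete the table's entries from hbase:meta.
            admin.deleteTable(tn);
          }
        }
      }
    }
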
2023-07-12 10:58:13,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:13,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:13,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:13,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:13,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:13,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:13,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:13,450 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:13,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:13,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:13,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:13,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:13,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:13,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:13,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:13,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:41017] to rsgroup master 2023-07-12 10:58:13,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:13,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.CallRunner(144): callId: 163 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:45870 deadline: 1689160693463, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 2023-07-12 10:58:13,463 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:13,466 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:13,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:13,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:13,467 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:42501, jenkins-hbase9.apache.org:43117, jenkins-hbase9.apache.org:43635, jenkins-hbase9.apache.org:45597], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:13,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:13,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:13,488 INFO [Listener at localhost/44831] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testDefaultNamespaceCreateAndAssign Thread=500 (was 481) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5-prefix:jenkins-hbase9.apache.org,43635,1689159491271 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741846_1022, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) 
java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1616880846_17 at /127.0.0.1:37478 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_744347008_17 at /127.0.0.1:33738 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741846_1022] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x324c1766-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:42757 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x324c1766-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_744347008_17 at /127.0.0.1:33750 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741846_1022] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741846_1022, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741846_1022, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_744347008_17 at /127.0.0.1:39526 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741846_1022] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=778 (was 758) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=341 (was 341), ProcessCount=172 (was 172), AvailableMemoryMB=6107 (was 6161) 2023-07-12 10:58:13,504 INFO [Listener at localhost/44831] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateMultiRegion Thread=500, OpenFileDescriptor=778, MaxFileDescriptor=60000, SystemLoadAverage=341, ProcessCount=172, AvailableMemoryMB=6106 2023-07-12 10:58:13,504 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(132): testCreateMultiRegion 2023-07-12 10:58:13,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:13,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:13,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:13,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
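The entries that follow repeat the per-test rsgroup cleanup already seen at 10:58:13,438 above: stray tables and servers are moved back to the default group, the helper group "master" is removed and re-added, and an attempt is made to move the master's own address (jenkins-hbase9.apache.org:41017) into it. That last call fails with the ConstraintException because the master's address is not among the online RegionServers listed for any group. The following is a minimal, illustrative sketch of those client-side calls, assuming the branch-2.4 RSGroupAdminClient API named in the stack traces above; it is not the test's actual code, and the class name, connection setup, and exception handling here are assumptions.

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RsGroupCleanupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);

      // Park any stray tables/servers back in the default group; the server side
      // ignores empty sets, which is the "passed an empty set. Ignoring." DEBUG line.
      groupAdmin.moveTables(Collections.<TableName>emptySet(), RSGroupInfo.DEFAULT_GROUP);
      groupAdmin.moveServers(Collections.<Address>emptySet(), RSGroupInfo.DEFAULT_GROUP);

      // Drop and recreate the helper group the tests keep for the master.
      groupAdmin.removeRSGroup("master");
      groupAdmin.addRSGroup("master");

      // Try to move the master's address into that group. The master is not an
      // online RegionServer, so this is the call that produces the
      // ConstraintException "Server ... is either offline or it does not exist".
      Address masterAddr = Address.fromParts("jenkins-hbase9.apache.org", 41017);
      try {
        groupAdmin.moveServers(Collections.singleton(masterAddr), "master");
      } catch (IOException expected) {
        // The test logs this as "Got this on setup, FYI" and continues.
      }
    }
  }
}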
2023-07-12 10:58:13,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables
2023-07-12 10:58:13,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default
2023-07-12 10:58:13,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers
2023-07-12 10:58:13,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master
2023-07-12 10:58:13,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-12 10:58:13,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-12 10:58:13,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-12 10:58:13,523 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-12 10:58:13,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master
2023-07-12 10:58:13,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-12 10:58:13,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-12 10:58:13,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-12 10:58:13,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup
2023-07-12 10:58:13,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup
2023-07-12 10:58:13,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-12 10:58:13,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:41017] to rsgroup master
2023-07-12 10:58:13,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:13,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.CallRunner(144): callId: 191 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:45870 deadline: 1689160693538, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 2023-07-12 10:58:13,539 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:13,540 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:13,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:13,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:13,541 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:42501, jenkins-hbase9.apache.org:43117, jenkins-hbase9.apache.org:43635, jenkins-hbase9.apache.org:45597], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:13,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:13,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:13,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:13,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-12 10:58:13,549 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=30, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:13,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating 
table: namespace: "default" qualifier: "Group_testCreateMultiRegion" procId is: 30 2023-07-12 10:58:13,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=30 2023-07-12 10:58:13,551 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:13,552 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:13,552 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:13,556 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=30, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:13,566 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/aa4a97a87c6fc1d33907d2bfca429f6f 2023-07-12 10:58:13,566 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/2b5f3d48acb383a31b5a40a5ac6a05da 2023-07-12 10:58:13,566 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/f33b5d590762fb3bdcf16b4383581397 2023-07-12 10:58:13,567 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/c4c45abdc78a3ec97cd80e52c0a7f6ec 2023-07-12 10:58:13,567 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/a3deacea066be39695a4220eb45806ba 2023-07-12 10:58:13,567 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/34bfc030588548a8db0ed867f696ebe1 2023-07-12 10:58:13,567 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/a9805313c53b56463f2953675bbf0488 2023-07-12 10:58:13,567 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/aa4a97a87c6fc1d33907d2bfca429f6f empty. 2023-07-12 10:58:13,567 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/30a8043395e1b387772cd1ac0120a878 2023-07-12 10:58:13,567 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/f33b5d590762fb3bdcf16b4383581397 empty. 
2023-07-12 10:58:13,567 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/2b5f3d48acb383a31b5a40a5ac6a05da empty. 2023-07-12 10:58:13,568 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/34bfc030588548a8db0ed867f696ebe1 empty. 2023-07-12 10:58:13,568 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/c4c45abdc78a3ec97cd80e52c0a7f6ec empty. 2023-07-12 10:58:13,568 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/a3deacea066be39695a4220eb45806ba empty. 2023-07-12 10:58:13,568 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/f33b5d590762fb3bdcf16b4383581397 2023-07-12 10:58:13,568 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/30a8043395e1b387772cd1ac0120a878 empty. 2023-07-12 10:58:13,568 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/40d01a652659184971d17fc8e26316ec 2023-07-12 10:58:13,568 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/aa4a97a87c6fc1d33907d2bfca429f6f 2023-07-12 10:58:13,569 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/045026e1d038c146082663535dce70e5 2023-07-12 10:58:13,569 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/a9805313c53b56463f2953675bbf0488 empty. 
2023-07-12 10:58:13,569 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/34bfc030588548a8db0ed867f696ebe1 2023-07-12 10:58:13,569 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/c4c45abdc78a3ec97cd80e52c0a7f6ec 2023-07-12 10:58:13,569 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/2b5f3d48acb383a31b5a40a5ac6a05da 2023-07-12 10:58:13,569 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/30a8043395e1b387772cd1ac0120a878 2023-07-12 10:58:13,569 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/40d01a652659184971d17fc8e26316ec empty. 2023-07-12 10:58:13,569 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/a3deacea066be39695a4220eb45806ba 2023-07-12 10:58:13,569 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/a9805313c53b56463f2953675bbf0488 2023-07-12 10:58:13,570 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/045026e1d038c146082663535dce70e5 empty. 
2023-07-12 10:58:13,570 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/40d01a652659184971d17fc8e26316ec 2023-07-12 10:58:13,570 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/045026e1d038c146082663535dce70e5 2023-07-12 10:58:13,570 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testCreateMultiRegion regions 2023-07-12 10:58:13,596 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:13,598 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 2b5f3d48acb383a31b5a40a5ac6a05da, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => '\x00"$&('}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:13,598 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => aa4a97a87c6fc1d33907d2bfca429f6f, NAME => 'Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:13,598 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(7675): creating {ENCODED => f33b5d590762fb3bdcf16b4383581397, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:13,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=30 2023-07-12 10:58:13,652 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; 
minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:13,653 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 2b5f3d48acb383a31b5a40a5ac6a05da, disabling compactions & flushes 2023-07-12 10:58:13,653 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da. 2023-07-12 10:58:13,653 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da. 2023-07-12 10:58:13,653 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da. after waiting 0 ms 2023-07-12 10:58:13,653 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da. 2023-07-12 10:58:13,653 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da. 2023-07-12 10:58:13,653 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 2b5f3d48acb383a31b5a40a5ac6a05da: 2023-07-12 10:58:13,653 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => a3deacea066be39695a4220eb45806ba, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:13,674 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:13,674 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing a3deacea066be39695a4220eb45806ba, disabling compactions & flushes 2023-07-12 10:58:13,674 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba. 2023-07-12 10:58:13,674 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba. 2023-07-12 10:58:13,674 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba. 
after waiting 0 ms 2023-07-12 10:58:13,674 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba. 2023-07-12 10:58:13,674 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba. 2023-07-12 10:58:13,674 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for a3deacea066be39695a4220eb45806ba: 2023-07-12 10:58:13,675 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => c4c45abdc78a3ec97cd80e52c0a7f6ec, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:13,692 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:13,692 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing c4c45abdc78a3ec97cd80e52c0a7f6ec, disabling compactions & flushes 2023-07-12 10:58:13,693 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec. 2023-07-12 10:58:13,693 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec. 2023-07-12 10:58:13,693 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec. after waiting 0 ms 2023-07-12 10:58:13,693 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec. 2023-07-12 10:58:13,693 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec. 
2023-07-12 10:58:13,693 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for c4c45abdc78a3ec97cd80e52c0a7f6ec: 2023-07-12 10:58:13,693 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 30a8043395e1b387772cd1ac0120a878, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:13,710 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:13,710 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 30a8043395e1b387772cd1ac0120a878, disabling compactions & flushes 2023-07-12 10:58:13,711 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878. 2023-07-12 10:58:13,711 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878. 2023-07-12 10:58:13,711 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878. after waiting 0 ms 2023-07-12 10:58:13,711 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878. 2023-07-12 10:58:13,711 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878. 
2023-07-12 10:58:13,711 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 30a8043395e1b387772cd1ac0120a878: 2023-07-12 10:58:13,711 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => a9805313c53b56463f2953675bbf0488, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:13,723 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:13,723 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing a9805313c53b56463f2953675bbf0488, disabling compactions & flushes 2023-07-12 10:58:13,723 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488. 2023-07-12 10:58:13,724 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488. 2023-07-12 10:58:13,724 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488. after waiting 0 ms 2023-07-12 10:58:13,724 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488. 2023-07-12 10:58:13,724 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488. 
2023-07-12 10:58:13,724 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for a9805313c53b56463f2953675bbf0488: 2023-07-12 10:58:13,724 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 34bfc030588548a8db0ed867f696ebe1, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:13,736 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:13,736 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 34bfc030588548a8db0ed867f696ebe1, disabling compactions & flushes 2023-07-12 10:58:13,736 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1. 2023-07-12 10:58:13,736 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1. 2023-07-12 10:58:13,736 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1. after waiting 0 ms 2023-07-12 10:58:13,736 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1. 2023-07-12 10:58:13,736 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1. 
2023-07-12 10:58:13,736 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 34bfc030588548a8db0ed867f696ebe1: 2023-07-12 10:58:13,737 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 40d01a652659184971d17fc8e26316ec, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:13,750 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:13,751 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 40d01a652659184971d17fc8e26316ec, disabling compactions & flushes 2023-07-12 10:58:13,751 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec. 2023-07-12 10:58:13,751 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec. 2023-07-12 10:58:13,751 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec. after waiting 0 ms 2023-07-12 10:58:13,751 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec. 2023-07-12 10:58:13,751 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec. 
2023-07-12 10:58:13,751 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 40d01a652659184971d17fc8e26316ec: 2023-07-12 10:58:13,751 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 045026e1d038c146082663535dce70e5, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:13,765 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:13,765 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 045026e1d038c146082663535dce70e5, disabling compactions & flushes 2023-07-12 10:58:13,765 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5. 2023-07-12 10:58:13,765 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5. 2023-07-12 10:58:13,765 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5. after waiting 0 ms 2023-07-12 10:58:13,765 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5. 2023-07-12 10:58:13,765 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5. 
2023-07-12 10:58:13,765 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 045026e1d038c146082663535dce70e5: 2023-07-12 10:58:13,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=30 2023-07-12 10:58:14,049 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:14,050 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:14,050 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing aa4a97a87c6fc1d33907d2bfca429f6f, disabling compactions & flushes 2023-07-12 10:58:14,050 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1604): Closing f33b5d590762fb3bdcf16b4383581397, disabling compactions & flushes 2023-07-12 10:58:14,050 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f. 2023-07-12 10:58:14,050 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397. 2023-07-12 10:58:14,050 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f. 2023-07-12 10:58:14,050 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397. 2023-07-12 10:58:14,050 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f. after waiting 0 ms 2023-07-12 10:58:14,050 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397. after waiting 0 ms 2023-07-12 10:58:14,050 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f. 2023-07-12 10:58:14,050 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397. 2023-07-12 10:58:14,050 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f. 
2023-07-12 10:58:14,050 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397. 2023-07-12 10:58:14,050 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for aa4a97a87c6fc1d33907d2bfca429f6f: 2023-07-12 10:58:14,050 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1558): Region close journal for f33b5d590762fb3bdcf16b4383581397: 2023-07-12 10:58:14,054 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=30, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:14,056 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494055"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159494055"}]},"ts":"1689159494055"} 2023-07-12 10:58:14,056 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494055"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159494055"}]},"ts":"1689159494055"} 2023-07-12 10:58:14,056 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494055"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159494055"}]},"ts":"1689159494055"} 2023-07-12 10:58:14,056 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689159493545.30a8043395e1b387772cd1ac0120a878.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494055"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159494055"}]},"ts":"1689159494055"} 2023-07-12 10:58:14,056 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689159493545.a9805313c53b56463f2953675bbf0488.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494055"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159494055"}]},"ts":"1689159494055"} 2023-07-12 10:58:14,056 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494055"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159494055"}]},"ts":"1689159494055"} 2023-07-12 10:58:14,056 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689159493545.40d01a652659184971d17fc8e26316ec.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494055"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159494055"}]},"ts":"1689159494055"} 2023-07-12 10:58:14,056 DEBUG [PEWorker-2] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689159493545.045026e1d038c146082663535dce70e5.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159494055"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159494055"}]},"ts":"1689159494055"} 2023-07-12 10:58:14,056 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159494055"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159494055"}]},"ts":"1689159494055"} 2023-07-12 10:58:14,057 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689159493545.f33b5d590762fb3bdcf16b4383581397.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494055"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159494055"}]},"ts":"1689159494055"} 2023-07-12 10:58:14,061 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 10 regions to meta. 2023-07-12 10:58:14,062 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=30, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:14,062 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159494062"}]},"ts":"1689159494062"} 2023-07-12 10:58:14,064 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=ENABLING in hbase:meta 2023-07-12 10:58:14,067 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:14,067 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:14,067 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:14,067 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:14,067 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 10:58:14,067 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:14,068 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=aa4a97a87c6fc1d33907d2bfca429f6f, ASSIGN}, {pid=32, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=2b5f3d48acb383a31b5a40a5ac6a05da, ASSIGN}, {pid=33, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f33b5d590762fb3bdcf16b4383581397, ASSIGN}, {pid=34, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a3deacea066be39695a4220eb45806ba, ASSIGN}, {pid=35, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=Group_testCreateMultiRegion, region=c4c45abdc78a3ec97cd80e52c0a7f6ec, ASSIGN}, {pid=36, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=30a8043395e1b387772cd1ac0120a878, ASSIGN}, {pid=37, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a9805313c53b56463f2953675bbf0488, ASSIGN}, {pid=38, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=34bfc030588548a8db0ed867f696ebe1, ASSIGN}, {pid=39, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=40d01a652659184971d17fc8e26316ec, ASSIGN}, {pid=40, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=045026e1d038c146082663535dce70e5, ASSIGN}] 2023-07-12 10:58:14,071 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=36, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=30a8043395e1b387772cd1ac0120a878, ASSIGN 2023-07-12 10:58:14,071 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=35, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=c4c45abdc78a3ec97cd80e52c0a7f6ec, ASSIGN 2023-07-12 10:58:14,072 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=34, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a3deacea066be39695a4220eb45806ba, ASSIGN 2023-07-12 10:58:14,072 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f33b5d590762fb3bdcf16b4383581397, ASSIGN 2023-07-12 10:58:14,073 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=36, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=30a8043395e1b387772cd1ac0120a878, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,42501,1689159484335; forceNewPlan=false, retain=false 2023-07-12 10:58:14,073 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=35, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=c4c45abdc78a3ec97cd80e52c0a7f6ec, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,45597,1689159484713; forceNewPlan=false, retain=false 2023-07-12 10:58:14,073 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=33, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f33b5d590762fb3bdcf16b4383581397, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,43635,1689159491271; forceNewPlan=false, retain=false 2023-07-12 10:58:14,073 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=40, ppid=30, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=045026e1d038c146082663535dce70e5, ASSIGN 2023-07-12 10:58:14,073 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=34, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a3deacea066be39695a4220eb45806ba, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,43635,1689159491271; forceNewPlan=false, retain=false 2023-07-12 10:58:14,074 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=39, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=40d01a652659184971d17fc8e26316ec, ASSIGN 2023-07-12 10:58:14,075 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=38, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=34bfc030588548a8db0ed867f696ebe1, ASSIGN 2023-07-12 10:58:14,075 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=40, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=045026e1d038c146082663535dce70e5, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,42501,1689159484335; forceNewPlan=false, retain=false 2023-07-12 10:58:14,075 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=37, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a9805313c53b56463f2953675bbf0488, ASSIGN 2023-07-12 10:58:14,075 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=2b5f3d48acb383a31b5a40a5ac6a05da, ASSIGN 2023-07-12 10:58:14,076 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=39, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=40d01a652659184971d17fc8e26316ec, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,45597,1689159484713; forceNewPlan=false, retain=false 2023-07-12 10:58:14,076 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=38, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=34bfc030588548a8db0ed867f696ebe1, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,43635,1689159491271; forceNewPlan=false, retain=false 2023-07-12 10:58:14,076 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=aa4a97a87c6fc1d33907d2bfca429f6f, ASSIGN 2023-07-12 10:58:14,076 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=32, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=2b5f3d48acb383a31b5a40a5ac6a05da, ASSIGN; state=OFFLINE, 
location=jenkins-hbase9.apache.org,43117,1689159488336; forceNewPlan=false, retain=false 2023-07-12 10:58:14,076 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=37, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a9805313c53b56463f2953675bbf0488, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,43117,1689159488336; forceNewPlan=false, retain=false 2023-07-12 10:58:14,077 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=31, ppid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=aa4a97a87c6fc1d33907d2bfca429f6f, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,43117,1689159488336; forceNewPlan=false, retain=false 2023-07-12 10:58:14,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=30 2023-07-12 10:58:14,223 INFO [jenkins-hbase9:41017] balancer.BaseLoadBalancer(1545): Reassigned 10 regions. 10 retained the pre-restart assignment. 2023-07-12 10:58:14,231 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=40d01a652659184971d17fc8e26316ec, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:14,231 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689159493545.40d01a652659184971d17fc8e26316ec.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494231"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494231"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494231"}]},"ts":"1689159494231"} 2023-07-12 10:58:14,232 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=c4c45abdc78a3ec97cd80e52c0a7f6ec, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:14,232 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=a3deacea066be39695a4220eb45806ba, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:14,232 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494232"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494232"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494232"}]},"ts":"1689159494232"} 2023-07-12 10:58:14,232 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=38 updating hbase:meta row=34bfc030588548a8db0ed867f696ebe1, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:14,233 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494232"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494232"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494232"}]},"ts":"1689159494232"} 2023-07-12 10:58:14,232 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494232"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494232"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494232"}]},"ts":"1689159494232"} 2023-07-12 10:58:14,233 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=f33b5d590762fb3bdcf16b4383581397, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:14,235 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689159493545.f33b5d590762fb3bdcf16b4383581397.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494233"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494233"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494233"}]},"ts":"1689159494233"} 2023-07-12 10:58:14,237 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=39, state=RUNNABLE; OpenRegionProcedure 40d01a652659184971d17fc8e26316ec, server=jenkins-hbase9.apache.org,45597,1689159484713}] 2023-07-12 10:58:14,239 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=35, state=RUNNABLE; OpenRegionProcedure c4c45abdc78a3ec97cd80e52c0a7f6ec, server=jenkins-hbase9.apache.org,45597,1689159484713}] 2023-07-12 10:58:14,240 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=38, state=RUNNABLE; OpenRegionProcedure 34bfc030588548a8db0ed867f696ebe1, server=jenkins-hbase9.apache.org,43635,1689159491271}] 2023-07-12 10:58:14,242 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=34, state=RUNNABLE; OpenRegionProcedure a3deacea066be39695a4220eb45806ba, server=jenkins-hbase9.apache.org,43635,1689159491271}] 2023-07-12 10:58:14,243 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=33, state=RUNNABLE; OpenRegionProcedure f33b5d590762fb3bdcf16b4383581397, server=jenkins-hbase9.apache.org,43635,1689159491271}] 2023-07-12 10:58:14,244 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=37 updating hbase:meta row=a9805313c53b56463f2953675bbf0488, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:14,244 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689159493545.a9805313c53b56463f2953675bbf0488.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494244"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494244"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494244"}]},"ts":"1689159494244"} 2023-07-12 10:58:14,247 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=37, state=RUNNABLE; OpenRegionProcedure a9805313c53b56463f2953675bbf0488, server=jenkins-hbase9.apache.org,43117,1689159488336}] 2023-07-12 10:58:14,250 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=2b5f3d48acb383a31b5a40a5ac6a05da, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:14,250 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494250"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494250"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494250"}]},"ts":"1689159494250"} 2023-07-12 10:58:14,251 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=aa4a97a87c6fc1d33907d2bfca429f6f, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:14,251 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159494251"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494251"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494251"}]},"ts":"1689159494251"} 2023-07-12 10:58:14,252 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=045026e1d038c146082663535dce70e5, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:14,252 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689159493545.045026e1d038c146082663535dce70e5.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159494252"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494252"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494252"}]},"ts":"1689159494252"} 2023-07-12 10:58:14,253 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=32, state=RUNNABLE; OpenRegionProcedure 2b5f3d48acb383a31b5a40a5ac6a05da, server=jenkins-hbase9.apache.org,43117,1689159488336}] 2023-07-12 10:58:14,255 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=31, state=RUNNABLE; OpenRegionProcedure aa4a97a87c6fc1d33907d2bfca429f6f, server=jenkins-hbase9.apache.org,43117,1689159488336}] 2023-07-12 10:58:14,261 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=49, ppid=40, state=RUNNABLE; OpenRegionProcedure 045026e1d038c146082663535dce70e5, server=jenkins-hbase9.apache.org,42501,1689159484335}] 2023-07-12 10:58:14,262 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=30a8043395e1b387772cd1ac0120a878, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:14,262 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689159493545.30a8043395e1b387772cd1ac0120a878.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494262"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494262"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494262"}]},"ts":"1689159494262"} 2023-07-12 10:58:14,265 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=36, state=RUNNABLE; OpenRegionProcedure 30a8043395e1b387772cd1ac0120a878, server=jenkins-hbase9.apache.org,42501,1689159484335}] 2023-07-12 10:58:14,390 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to 
jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:14,391 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:14,392 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:40122, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:14,395 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:14,396 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:14,397 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:36434, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:14,397 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec. 2023-07-12 10:58:14,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c4c45abdc78a3ec97cd80e52c0a7f6ec, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'} 2023-07-12 10:58:14,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion c4c45abdc78a3ec97cd80e52c0a7f6ec 2023-07-12 10:58:14,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:14,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for c4c45abdc78a3ec97cd80e52c0a7f6ec 2023-07-12 10:58:14,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for c4c45abdc78a3ec97cd80e52c0a7f6ec 2023-07-12 10:58:14,399 INFO [StoreOpener-c4c45abdc78a3ec97cd80e52c0a7f6ec-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c4c45abdc78a3ec97cd80e52c0a7f6ec 2023-07-12 10:58:14,401 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1. 
2023-07-12 10:58:14,401 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 34bfc030588548a8db0ed867f696ebe1, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'} 2023-07-12 10:58:14,401 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 34bfc030588548a8db0ed867f696ebe1 2023-07-12 10:58:14,401 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:14,402 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 34bfc030588548a8db0ed867f696ebe1 2023-07-12 10:58:14,402 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 34bfc030588548a8db0ed867f696ebe1 2023-07-12 10:58:14,402 DEBUG [StoreOpener-c4c45abdc78a3ec97cd80e52c0a7f6ec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/c4c45abdc78a3ec97cd80e52c0a7f6ec/f 2023-07-12 10:58:14,402 DEBUG [StoreOpener-c4c45abdc78a3ec97cd80e52c0a7f6ec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/c4c45abdc78a3ec97cd80e52c0a7f6ec/f 2023-07-12 10:58:14,403 INFO [StoreOpener-34bfc030588548a8db0ed867f696ebe1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 34bfc030588548a8db0ed867f696ebe1 2023-07-12 10:58:14,404 INFO [StoreOpener-c4c45abdc78a3ec97cd80e52c0a7f6ec-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c4c45abdc78a3ec97cd80e52c0a7f6ec columnFamilyName f 2023-07-12 10:58:14,405 INFO [StoreOpener-c4c45abdc78a3ec97cd80e52c0a7f6ec-1] regionserver.HStore(310): Store=c4c45abdc78a3ec97cd80e52c0a7f6ec/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:14,405 DEBUG [StoreOpener-34bfc030588548a8db0ed867f696ebe1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/34bfc030588548a8db0ed867f696ebe1/f 2023-07-12 10:58:14,406 DEBUG [StoreOpener-34bfc030588548a8db0ed867f696ebe1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/34bfc030588548a8db0ed867f696ebe1/f 2023-07-12 10:58:14,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/c4c45abdc78a3ec97cd80e52c0a7f6ec 2023-07-12 10:58:14,406 INFO [StoreOpener-34bfc030588548a8db0ed867f696ebe1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 34bfc030588548a8db0ed867f696ebe1 columnFamilyName f 2023-07-12 10:58:14,407 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/c4c45abdc78a3ec97cd80e52c0a7f6ec 2023-07-12 10:58:14,407 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f. 
2023-07-12 10:58:14,410 INFO [StoreOpener-34bfc030588548a8db0ed867f696ebe1-1] regionserver.HStore(310): Store=34bfc030588548a8db0ed867f696ebe1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:14,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aa4a97a87c6fc1d33907d2bfca429f6f, NAME => 'Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'} 2023-07-12 10:58:14,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion aa4a97a87c6fc1d33907d2bfca429f6f 2023-07-12 10:58:14,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:14,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for aa4a97a87c6fc1d33907d2bfca429f6f 2023-07-12 10:58:14,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for aa4a97a87c6fc1d33907d2bfca429f6f 2023-07-12 10:58:14,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/34bfc030588548a8db0ed867f696ebe1 2023-07-12 10:58:14,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/34bfc030588548a8db0ed867f696ebe1 2023-07-12 10:58:14,412 INFO [StoreOpener-aa4a97a87c6fc1d33907d2bfca429f6f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region aa4a97a87c6fc1d33907d2bfca429f6f 2023-07-12 10:58:14,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for c4c45abdc78a3ec97cd80e52c0a7f6ec 2023-07-12 10:58:14,413 DEBUG [StoreOpener-aa4a97a87c6fc1d33907d2bfca429f6f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/aa4a97a87c6fc1d33907d2bfca429f6f/f 2023-07-12 10:58:14,413 DEBUG [StoreOpener-aa4a97a87c6fc1d33907d2bfca429f6f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/aa4a97a87c6fc1d33907d2bfca429f6f/f 2023-07-12 10:58:14,414 INFO [StoreOpener-aa4a97a87c6fc1d33907d2bfca429f6f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aa4a97a87c6fc1d33907d2bfca429f6f columnFamilyName f 2023-07-12 10:58:14,415 INFO [StoreOpener-aa4a97a87c6fc1d33907d2bfca429f6f-1] regionserver.HStore(310): Store=aa4a97a87c6fc1d33907d2bfca429f6f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:14,415 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 34bfc030588548a8db0ed867f696ebe1 2023-07-12 10:58:14,419 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/c4c45abdc78a3ec97cd80e52c0a7f6ec/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:14,419 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/aa4a97a87c6fc1d33907d2bfca429f6f 2023-07-12 10:58:14,420 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/aa4a97a87c6fc1d33907d2bfca429f6f 2023-07-12 10:58:14,420 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened c4c45abdc78a3ec97cd80e52c0a7f6ec; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11043749440, jitterRate=0.028529316186904907}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:14,420 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for c4c45abdc78a3ec97cd80e52c0a7f6ec: 2023-07-12 10:58:14,423 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec., pid=42, masterSystemTime=1689159494390 2023-07-12 10:58:14,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for aa4a97a87c6fc1d33907d2bfca429f6f 2023-07-12 10:58:14,429 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878. 
2023-07-12 10:58:14,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 30a8043395e1b387772cd1ac0120a878, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'} 2023-07-12 10:58:14,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 30a8043395e1b387772cd1ac0120a878 2023-07-12 10:58:14,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:14,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 30a8043395e1b387772cd1ac0120a878 2023-07-12 10:58:14,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 30a8043395e1b387772cd1ac0120a878 2023-07-12 10:58:14,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/34bfc030588548a8db0ed867f696ebe1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:14,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec. 2023-07-12 10:58:14,432 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec. 2023-07-12 10:58:14,432 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec. 
2023-07-12 10:58:14,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 40d01a652659184971d17fc8e26316ec, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'} 2023-07-12 10:58:14,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 40d01a652659184971d17fc8e26316ec 2023-07-12 10:58:14,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:14,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 40d01a652659184971d17fc8e26316ec 2023-07-12 10:58:14,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 40d01a652659184971d17fc8e26316ec 2023-07-12 10:58:14,434 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 34bfc030588548a8db0ed867f696ebe1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10427132320, jitterRate=-0.02889762818813324}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:14,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 34bfc030588548a8db0ed867f696ebe1: 2023-07-12 10:58:14,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/aa4a97a87c6fc1d33907d2bfca429f6f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:14,434 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=c4c45abdc78a3ec97cd80e52c0a7f6ec, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:14,434 INFO [StoreOpener-30a8043395e1b387772cd1ac0120a878-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 30a8043395e1b387772cd1ac0120a878 2023-07-12 10:58:14,434 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494434"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159494434"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159494434"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159494434"}]},"ts":"1689159494434"} 2023-07-12 10:58:14,435 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1., pid=43, masterSystemTime=1689159494395 2023-07-12 
10:58:14,437 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened aa4a97a87c6fc1d33907d2bfca429f6f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9930113440, jitterRate=-0.0751861184835434}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:14,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for aa4a97a87c6fc1d33907d2bfca429f6f: 2023-07-12 10:58:14,438 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f., pid=48, masterSystemTime=1689159494402 2023-07-12 10:58:14,438 INFO [StoreOpener-40d01a652659184971d17fc8e26316ec-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 40d01a652659184971d17fc8e26316ec 2023-07-12 10:58:14,439 DEBUG [StoreOpener-30a8043395e1b387772cd1ac0120a878-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/30a8043395e1b387772cd1ac0120a878/f 2023-07-12 10:58:14,439 DEBUG [StoreOpener-30a8043395e1b387772cd1ac0120a878-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/30a8043395e1b387772cd1ac0120a878/f 2023-07-12 10:58:14,439 INFO [StoreOpener-30a8043395e1b387772cd1ac0120a878-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 30a8043395e1b387772cd1ac0120a878 columnFamilyName f 2023-07-12 10:58:14,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1. 2023-07-12 10:58:14,441 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1. 2023-07-12 10:58:14,441 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba. 
2023-07-12 10:58:14,441 INFO [StoreOpener-30a8043395e1b387772cd1ac0120a878-1] regionserver.HStore(310): Store=30a8043395e1b387772cd1ac0120a878/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:14,441 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a3deacea066be39695a4220eb45806ba, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'} 2023-07-12 10:58:14,442 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=38 updating hbase:meta row=34bfc030588548a8db0ed867f696ebe1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:14,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion a3deacea066be39695a4220eb45806ba 2023-07-12 10:58:14,442 DEBUG [StoreOpener-40d01a652659184971d17fc8e26316ec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/40d01a652659184971d17fc8e26316ec/f 2023-07-12 10:58:14,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:14,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/30a8043395e1b387772cd1ac0120a878 2023-07-12 10:58:14,442 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494442"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159494442"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159494442"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159494442"}]},"ts":"1689159494442"} 2023-07-12 10:58:14,443 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f. 
2023-07-12 10:58:14,443 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for a3deacea066be39695a4220eb45806ba 2023-07-12 10:58:14,443 DEBUG [StoreOpener-40d01a652659184971d17fc8e26316ec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/40d01a652659184971d17fc8e26316ec/f 2023-07-12 10:58:14,443 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for a3deacea066be39695a4220eb45806ba 2023-07-12 10:58:14,443 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f. 2023-07-12 10:58:14,443 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da. 2023-07-12 10:58:14,443 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2b5f3d48acb383a31b5a40a5ac6a05da, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => '\x00"$&('} 2023-07-12 10:58:14,443 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/30a8043395e1b387772cd1ac0120a878 2023-07-12 10:58:14,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 2b5f3d48acb383a31b5a40a5ac6a05da 2023-07-12 10:58:14,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:14,444 INFO [StoreOpener-40d01a652659184971d17fc8e26316ec-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 40d01a652659184971d17fc8e26316ec columnFamilyName f 2023-07-12 10:58:14,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 2b5f3d48acb383a31b5a40a5ac6a05da 2023-07-12 10:58:14,444 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=35 2023-07-12 10:58:14,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 2b5f3d48acb383a31b5a40a5ac6a05da 2023-07-12 10:58:14,444 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=35, state=SUCCESS; OpenRegionProcedure c4c45abdc78a3ec97cd80e52c0a7f6ec, server=jenkins-hbase9.apache.org,45597,1689159484713 in 199 msec 2023-07-12 10:58:14,445 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=aa4a97a87c6fc1d33907d2bfca429f6f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:14,445 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159494445"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159494445"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159494445"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159494445"}]},"ts":"1689159494445"} 2023-07-12 10:58:14,445 INFO [StoreOpener-40d01a652659184971d17fc8e26316ec-1] regionserver.HStore(310): Store=40d01a652659184971d17fc8e26316ec/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:14,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/40d01a652659184971d17fc8e26316ec 2023-07-12 10:58:14,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/40d01a652659184971d17fc8e26316ec 2023-07-12 10:58:14,449 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=c4c45abdc78a3ec97cd80e52c0a7f6ec, ASSIGN in 376 msec 2023-07-12 10:58:14,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 30a8043395e1b387772cd1ac0120a878 2023-07-12 10:58:14,451 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=31 2023-07-12 10:58:14,451 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=38 2023-07-12 10:58:14,451 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=31, state=SUCCESS; OpenRegionProcedure aa4a97a87c6fc1d33907d2bfca429f6f, server=jenkins-hbase9.apache.org,43117,1689159488336 in 193 msec 2023-07-12 10:58:14,451 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=38, state=SUCCESS; OpenRegionProcedure 34bfc030588548a8db0ed867f696ebe1, server=jenkins-hbase9.apache.org,43635,1689159491271 in 206 msec 2023-07-12 10:58:14,453 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=34bfc030588548a8db0ed867f696ebe1, ASSIGN in 383 msec 2023-07-12 10:58:14,454 INFO [StoreOpener-2b5f3d48acb383a31b5a40a5ac6a05da-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f 
of region 2b5f3d48acb383a31b5a40a5ac6a05da 2023-07-12 10:58:14,454 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=aa4a97a87c6fc1d33907d2bfca429f6f, ASSIGN in 383 msec 2023-07-12 10:58:14,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 40d01a652659184971d17fc8e26316ec 2023-07-12 10:58:14,457 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/30a8043395e1b387772cd1ac0120a878/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:14,458 INFO [StoreOpener-a3deacea066be39695a4220eb45806ba-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a3deacea066be39695a4220eb45806ba 2023-07-12 10:58:14,458 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 30a8043395e1b387772cd1ac0120a878; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11158093440, jitterRate=0.039178431034088135}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:14,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 30a8043395e1b387772cd1ac0120a878: 2023-07-12 10:58:14,459 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878., pid=50, masterSystemTime=1689159494419 2023-07-12 10:58:14,461 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878. 2023-07-12 10:58:14,461 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878. 2023-07-12 10:58:14,461 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5. 
2023-07-12 10:58:14,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 045026e1d038c146082663535dce70e5, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''} 2023-07-12 10:58:14,462 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=30a8043395e1b387772cd1ac0120a878, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:14,462 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689159493545.30a8043395e1b387772cd1ac0120a878.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494462"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159494462"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159494462"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159494462"}]},"ts":"1689159494462"} 2023-07-12 10:58:14,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 045026e1d038c146082663535dce70e5 2023-07-12 10:58:14,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:14,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 045026e1d038c146082663535dce70e5 2023-07-12 10:58:14,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 045026e1d038c146082663535dce70e5 2023-07-12 10:58:14,467 DEBUG [StoreOpener-2b5f3d48acb383a31b5a40a5ac6a05da-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/2b5f3d48acb383a31b5a40a5ac6a05da/f 2023-07-12 10:58:14,467 DEBUG [StoreOpener-2b5f3d48acb383a31b5a40a5ac6a05da-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/2b5f3d48acb383a31b5a40a5ac6a05da/f 2023-07-12 10:58:14,468 INFO [StoreOpener-2b5f3d48acb383a31b5a40a5ac6a05da-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2b5f3d48acb383a31b5a40a5ac6a05da columnFamilyName f 2023-07-12 10:58:14,470 INFO [StoreOpener-2b5f3d48acb383a31b5a40a5ac6a05da-1] regionserver.HStore(310): Store=2b5f3d48acb383a31b5a40a5ac6a05da/f, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:14,470 DEBUG [StoreOpener-a3deacea066be39695a4220eb45806ba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/a3deacea066be39695a4220eb45806ba/f 2023-07-12 10:58:14,470 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=36 2023-07-12 10:58:14,470 DEBUG [StoreOpener-a3deacea066be39695a4220eb45806ba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/a3deacea066be39695a4220eb45806ba/f 2023-07-12 10:58:14,470 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=36, state=SUCCESS; OpenRegionProcedure 30a8043395e1b387772cd1ac0120a878, server=jenkins-hbase9.apache.org,42501,1689159484335 in 200 msec 2023-07-12 10:58:14,471 INFO [StoreOpener-a3deacea066be39695a4220eb45806ba-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a3deacea066be39695a4220eb45806ba columnFamilyName f 2023-07-12 10:58:14,472 INFO [StoreOpener-a3deacea066be39695a4220eb45806ba-1] regionserver.HStore(310): Store=a3deacea066be39695a4220eb45806ba/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:14,472 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=30a8043395e1b387772cd1ac0120a878, ASSIGN in 402 msec 2023-07-12 10:58:14,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/40d01a652659184971d17fc8e26316ec/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:14,474 INFO [StoreOpener-045026e1d038c146082663535dce70e5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 045026e1d038c146082663535dce70e5 2023-07-12 10:58:14,475 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/a3deacea066be39695a4220eb45806ba 2023-07-12 10:58:14,475 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 40d01a652659184971d17fc8e26316ec; next 
sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11687929440, jitterRate=0.0885232537984848}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:14,475 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 40d01a652659184971d17fc8e26316ec: 2023-07-12 10:58:14,476 DEBUG [StoreOpener-045026e1d038c146082663535dce70e5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/045026e1d038c146082663535dce70e5/f 2023-07-12 10:58:14,476 DEBUG [StoreOpener-045026e1d038c146082663535dce70e5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/045026e1d038c146082663535dce70e5/f 2023-07-12 10:58:14,477 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/a3deacea066be39695a4220eb45806ba 2023-07-12 10:58:14,477 INFO [StoreOpener-045026e1d038c146082663535dce70e5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 045026e1d038c146082663535dce70e5 columnFamilyName f 2023-07-12 10:58:14,477 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/2b5f3d48acb383a31b5a40a5ac6a05da 2023-07-12 10:58:14,478 INFO [StoreOpener-045026e1d038c146082663535dce70e5-1] regionserver.HStore(310): Store=045026e1d038c146082663535dce70e5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:14,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/2b5f3d48acb383a31b5a40a5ac6a05da 2023-07-12 10:58:14,478 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec., pid=41, masterSystemTime=1689159494390 2023-07-12 10:58:14,480 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/045026e1d038c146082663535dce70e5 
2023-07-12 10:58:14,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec. 2023-07-12 10:58:14,481 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec. 2023-07-12 10:58:14,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/045026e1d038c146082663535dce70e5 2023-07-12 10:58:14,482 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=40d01a652659184971d17fc8e26316ec, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:14,482 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689159493545.40d01a652659184971d17fc8e26316ec.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494482"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159494482"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159494482"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159494482"}]},"ts":"1689159494482"} 2023-07-12 10:58:14,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for a3deacea066be39695a4220eb45806ba 2023-07-12 10:58:14,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 2b5f3d48acb383a31b5a40a5ac6a05da 2023-07-12 10:58:14,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/a3deacea066be39695a4220eb45806ba/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:14,488 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 045026e1d038c146082663535dce70e5 2023-07-12 10:58:14,488 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/2b5f3d48acb383a31b5a40a5ac6a05da/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:14,488 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened a3deacea066be39695a4220eb45806ba; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10772231520, jitterRate=0.0032422393560409546}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:14,488 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for a3deacea066be39695a4220eb45806ba: 2023-07-12 10:58:14,488 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=39 2023-07-12 10:58:14,488 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=39, state=SUCCESS; OpenRegionProcedure 40d01a652659184971d17fc8e26316ec, 
server=jenkins-hbase9.apache.org,45597,1689159484713 in 247 msec 2023-07-12 10:58:14,489 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 2b5f3d48acb383a31b5a40a5ac6a05da; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11992399360, jitterRate=0.11687922477722168}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:14,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 2b5f3d48acb383a31b5a40a5ac6a05da: 2023-07-12 10:58:14,489 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba., pid=44, masterSystemTime=1689159494395 2023-07-12 10:58:14,491 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=40d01a652659184971d17fc8e26316ec, ASSIGN in 420 msec 2023-07-12 10:58:14,491 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da., pid=47, masterSystemTime=1689159494402 2023-07-12 10:58:14,495 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/045026e1d038c146082663535dce70e5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:14,496 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 045026e1d038c146082663535dce70e5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10767009280, jitterRate=0.002755880355834961}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:14,496 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 045026e1d038c146082663535dce70e5: 2023-07-12 10:58:14,497 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5., pid=49, masterSystemTime=1689159494419 2023-07-12 10:58:14,502 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba. 2023-07-12 10:58:14,502 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba. 2023-07-12 10:58:14,502 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397. 
2023-07-12 10:58:14,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f33b5d590762fb3bdcf16b4383581397, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'} 2023-07-12 10:58:14,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion f33b5d590762fb3bdcf16b4383581397 2023-07-12 10:58:14,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:14,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for f33b5d590762fb3bdcf16b4383581397 2023-07-12 10:58:14,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for f33b5d590762fb3bdcf16b4383581397 2023-07-12 10:58:14,504 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=a3deacea066be39695a4220eb45806ba, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:14,505 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494504"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159494504"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159494504"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159494504"}]},"ts":"1689159494504"} 2023-07-12 10:58:14,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da. 2023-07-12 10:58:14,507 INFO [StoreOpener-f33b5d590762fb3bdcf16b4383581397-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f33b5d590762fb3bdcf16b4383581397 2023-07-12 10:58:14,507 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da. 2023-07-12 10:58:14,508 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488. 
2023-07-12 10:58:14,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a9805313c53b56463f2953675bbf0488, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'} 2023-07-12 10:58:14,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion a9805313c53b56463f2953675bbf0488 2023-07-12 10:58:14,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:14,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for a9805313c53b56463f2953675bbf0488 2023-07-12 10:58:14,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for a9805313c53b56463f2953675bbf0488 2023-07-12 10:58:14,510 DEBUG [StoreOpener-f33b5d590762fb3bdcf16b4383581397-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/f33b5d590762fb3bdcf16b4383581397/f 2023-07-12 10:58:14,510 DEBUG [StoreOpener-f33b5d590762fb3bdcf16b4383581397-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/f33b5d590762fb3bdcf16b4383581397/f 2023-07-12 10:58:14,510 INFO [StoreOpener-f33b5d590762fb3bdcf16b4383581397-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f33b5d590762fb3bdcf16b4383581397 columnFamilyName f 2023-07-12 10:58:14,511 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=2b5f3d48acb383a31b5a40a5ac6a05da, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:14,511 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494510"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159494510"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159494510"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159494510"}]},"ts":"1689159494510"} 2023-07-12 10:58:14,511 INFO [StoreOpener-a9805313c53b56463f2953675bbf0488-1] regionserver.HStore(381): Created cacheConfig: 
cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a9805313c53b56463f2953675bbf0488 2023-07-12 10:58:14,511 INFO [StoreOpener-f33b5d590762fb3bdcf16b4383581397-1] regionserver.HStore(310): Store=f33b5d590762fb3bdcf16b4383581397/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:14,512 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/f33b5d590762fb3bdcf16b4383581397 2023-07-12 10:58:14,513 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=045026e1d038c146082663535dce70e5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:14,513 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689159493545.045026e1d038c146082663535dce70e5.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159494512"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159494512"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159494512"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159494512"}]},"ts":"1689159494512"} 2023-07-12 10:58:14,513 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/f33b5d590762fb3bdcf16b4383581397 2023-07-12 10:58:14,515 DEBUG [StoreOpener-a9805313c53b56463f2953675bbf0488-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/a9805313c53b56463f2953675bbf0488/f 2023-07-12 10:58:14,515 DEBUG [StoreOpener-a9805313c53b56463f2953675bbf0488-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/a9805313c53b56463f2953675bbf0488/f 2023-07-12 10:58:14,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5. 2023-07-12 10:58:14,515 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5. 
2023-07-12 10:58:14,517 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=34 2023-07-12 10:58:14,517 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=34, state=SUCCESS; OpenRegionProcedure a3deacea066be39695a4220eb45806ba, server=jenkins-hbase9.apache.org,43635,1689159491271 in 268 msec 2023-07-12 10:58:14,520 INFO [StoreOpener-a9805313c53b56463f2953675bbf0488-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a9805313c53b56463f2953675bbf0488 columnFamilyName f 2023-07-12 10:58:14,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for f33b5d590762fb3bdcf16b4383581397 2023-07-12 10:58:14,523 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a3deacea066be39695a4220eb45806ba, ASSIGN in 449 msec 2023-07-12 10:58:14,523 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=32 2023-07-12 10:58:14,523 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=32, state=SUCCESS; OpenRegionProcedure 2b5f3d48acb383a31b5a40a5ac6a05da, server=jenkins-hbase9.apache.org,43117,1689159488336 in 262 msec 2023-07-12 10:58:14,524 INFO [StoreOpener-a9805313c53b56463f2953675bbf0488-1] regionserver.HStore(310): Store=a9805313c53b56463f2953675bbf0488/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:14,525 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=40 2023-07-12 10:58:14,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/a9805313c53b56463f2953675bbf0488 2023-07-12 10:58:14,525 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=40, state=SUCCESS; OpenRegionProcedure 045026e1d038c146082663535dce70e5, server=jenkins-hbase9.apache.org,42501,1689159484335 in 255 msec 2023-07-12 10:58:14,527 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/a9805313c53b56463f2953675bbf0488 2023-07-12 10:58:14,527 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/f33b5d590762fb3bdcf16b4383581397/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:14,528 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened f33b5d590762fb3bdcf16b4383581397; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11282053120, jitterRate=0.05072307586669922}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:14,528 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for f33b5d590762fb3bdcf16b4383581397: 2023-07-12 10:58:14,529 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397., pid=45, masterSystemTime=1689159494395 2023-07-12 10:58:14,530 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=2b5f3d48acb383a31b5a40a5ac6a05da, ASSIGN in 455 msec 2023-07-12 10:58:14,530 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=045026e1d038c146082663535dce70e5, ASSIGN in 458 msec 2023-07-12 10:58:14,531 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397. 2023-07-12 10:58:14,531 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397. 2023-07-12 10:58:14,531 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for a9805313c53b56463f2953675bbf0488 2023-07-12 10:58:14,532 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=f33b5d590762fb3bdcf16b4383581397, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:14,532 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689159493545.f33b5d590762fb3bdcf16b4383581397.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494532"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159494532"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159494532"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159494532"}]},"ts":"1689159494532"} 2023-07-12 10:58:14,536 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=33 2023-07-12 10:58:14,536 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=33, state=SUCCESS; OpenRegionProcedure f33b5d590762fb3bdcf16b4383581397, server=jenkins-hbase9.apache.org,43635,1689159491271 in 291 msec 2023-07-12 10:58:14,538 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f33b5d590762fb3bdcf16b4383581397, ASSIGN in 468 msec 2023-07-12 10:58:14,542 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/a9805313c53b56463f2953675bbf0488/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:14,543 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened a9805313c53b56463f2953675bbf0488; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10376175840, jitterRate=-0.033643320202827454}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:14,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for a9805313c53b56463f2953675bbf0488: 2023-07-12 10:58:14,544 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488., pid=46, masterSystemTime=1689159494402 2023-07-12 10:58:14,546 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488. 2023-07-12 10:58:14,546 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488. 2023-07-12 10:58:14,547 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=37 updating hbase:meta row=a9805313c53b56463f2953675bbf0488, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:14,547 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689159493545.a9805313c53b56463f2953675bbf0488.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494547"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159494547"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159494547"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159494547"}]},"ts":"1689159494547"} 2023-07-12 10:58:14,552 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=37 2023-07-12 10:58:14,552 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=37, state=SUCCESS; OpenRegionProcedure a9805313c53b56463f2953675bbf0488, server=jenkins-hbase9.apache.org,43117,1689159488336 in 302 msec 2023-07-12 10:58:14,555 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=30 2023-07-12 10:58:14,555 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a9805313c53b56463f2953675bbf0488, ASSIGN in 484 msec 2023-07-12 10:58:14,556 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=30, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:14,556 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159494556"}]},"ts":"1689159494556"} 2023-07-12 
10:58:14,558 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=ENABLED in hbase:meta 2023-07-12 10:58:14,561 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=30, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:14,564 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion in 1.0160 sec 2023-07-12 10:58:14,606 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testCreateMultiRegion' 2023-07-12 10:58:14,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=30 2023-07-12 10:58:14,658 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateMultiRegion, procId: 30 completed 2023-07-12 10:58:14,659 DEBUG [Listener at localhost/44831] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCreateMultiRegion get assigned. Timeout = 60000ms 2023-07-12 10:58:14,660 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:14,661 WARN [RPCClient-NioEventLoopGroup-6-1] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase9.apache.org/172.31.2.10:39623 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:39623 Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hbase.thirdparty.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:337) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 10:58:14,661 DEBUG [RPCClient-NioEventLoopGroup-6-1] ipc.FailedServers(52): Added failed server with address jenkins-hbase9.apache.org/172.31.2.10:39623 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:39623 2023-07-12 10:58:14,765 DEBUG [hconnection-0xddfa172-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:14,766 INFO [RS-EventLoopGroup-7-1] 
ipc.ServerRpcConnection(540): Connection from 172.31.2.10:59326, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:14,773 INFO [Listener at localhost/44831] hbase.HBaseTestingUtility(3484): All regions for table Group_testCreateMultiRegion assigned to meta. Checking AM states. 2023-07-12 10:58:14,773 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:14,774 INFO [Listener at localhost/44831] hbase.HBaseTestingUtility(3504): All regions for table Group_testCreateMultiRegion assigned. 2023-07-12 10:58:14,775 INFO [Listener at localhost/44831] client.HBaseAdmin$15(890): Started disable of Group_testCreateMultiRegion 2023-07-12 10:58:14,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable Group_testCreateMultiRegion 2023-07-12 10:58:14,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=51, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-12 10:58:14,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=51 2023-07-12 10:58:14,781 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159494781"}]},"ts":"1689159494781"} 2023-07-12 10:58:14,783 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=DISABLING in hbase:meta 2023-07-12 10:58:14,785 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testCreateMultiRegion to state=DISABLING 2023-07-12 10:58:14,788 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=52, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=2b5f3d48acb383a31b5a40a5ac6a05da, UNASSIGN}, {pid=53, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f33b5d590762fb3bdcf16b4383581397, UNASSIGN}, {pid=54, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a3deacea066be39695a4220eb45806ba, UNASSIGN}, {pid=55, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=c4c45abdc78a3ec97cd80e52c0a7f6ec, UNASSIGN}, {pid=56, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=30a8043395e1b387772cd1ac0120a878, UNASSIGN}, {pid=57, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a9805313c53b56463f2953675bbf0488, UNASSIGN}, {pid=58, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=34bfc030588548a8db0ed867f696ebe1, UNASSIGN}, {pid=59, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=40d01a652659184971d17fc8e26316ec, UNASSIGN}, {pid=60, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=045026e1d038c146082663535dce70e5, 
UNASSIGN}, {pid=61, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=aa4a97a87c6fc1d33907d2bfca429f6f, UNASSIGN}] 2023-07-12 10:58:14,790 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=045026e1d038c146082663535dce70e5, UNASSIGN 2023-07-12 10:58:14,790 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=40d01a652659184971d17fc8e26316ec, UNASSIGN 2023-07-12 10:58:14,791 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a9805313c53b56463f2953675bbf0488, UNASSIGN 2023-07-12 10:58:14,791 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=aa4a97a87c6fc1d33907d2bfca429f6f, UNASSIGN 2023-07-12 10:58:14,791 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=58, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=34bfc030588548a8db0ed867f696ebe1, UNASSIGN 2023-07-12 10:58:14,792 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=045026e1d038c146082663535dce70e5, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:14,792 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=a9805313c53b56463f2953675bbf0488, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:14,792 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689159493545.045026e1d038c146082663535dce70e5.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159494792"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494792"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494792"}]},"ts":"1689159494792"} 2023-07-12 10:58:14,792 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=aa4a97a87c6fc1d33907d2bfca429f6f, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:14,792 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=34bfc030588548a8db0ed867f696ebe1, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:14,792 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689159493545.a9805313c53b56463f2953675bbf0488.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494792"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494792"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494792"}]},"ts":"1689159494792"} 2023-07-12 10:58:14,793 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494792"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494792"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494792"}]},"ts":"1689159494792"} 2023-07-12 10:58:14,793 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=40d01a652659184971d17fc8e26316ec, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:14,793 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159494792"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494792"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494792"}]},"ts":"1689159494792"} 2023-07-12 10:58:14,793 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689159493545.40d01a652659184971d17fc8e26316ec.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494793"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494793"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494793"}]},"ts":"1689159494793"} 2023-07-12 10:58:14,794 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=60, state=RUNNABLE; CloseRegionProcedure 045026e1d038c146082663535dce70e5, server=jenkins-hbase9.apache.org,42501,1689159484335}] 2023-07-12 10:58:14,796 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=63, ppid=57, state=RUNNABLE; CloseRegionProcedure a9805313c53b56463f2953675bbf0488, server=jenkins-hbase9.apache.org,43117,1689159488336}] 2023-07-12 10:58:14,797 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=58, state=RUNNABLE; CloseRegionProcedure 34bfc030588548a8db0ed867f696ebe1, server=jenkins-hbase9.apache.org,43635,1689159491271}] 2023-07-12 10:58:14,798 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=30a8043395e1b387772cd1ac0120a878, UNASSIGN 2023-07-12 10:58:14,798 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=61, state=RUNNABLE; CloseRegionProcedure aa4a97a87c6fc1d33907d2bfca429f6f, server=jenkins-hbase9.apache.org,43117,1689159488336}] 2023-07-12 10:58:14,799 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=59, state=RUNNABLE; CloseRegionProcedure 40d01a652659184971d17fc8e26316ec, server=jenkins-hbase9.apache.org,45597,1689159484713}] 2023-07-12 10:58:14,800 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=30a8043395e1b387772cd1ac0120a878, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:14,800 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689159493545.30a8043395e1b387772cd1ac0120a878.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494800"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494800"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494800"}]},"ts":"1689159494800"} 2023-07-12 10:58:14,801 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=c4c45abdc78a3ec97cd80e52c0a7f6ec, UNASSIGN 2023-07-12 10:58:14,802 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=56, state=RUNNABLE; CloseRegionProcedure 30a8043395e1b387772cd1ac0120a878, server=jenkins-hbase9.apache.org,42501,1689159484335}] 2023-07-12 10:58:14,803 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=c4c45abdc78a3ec97cd80e52c0a7f6ec, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:14,803 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494803"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494803"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494803"}]},"ts":"1689159494803"} 2023-07-12 10:58:14,804 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a3deacea066be39695a4220eb45806ba, UNASSIGN 2023-07-12 10:58:14,804 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f33b5d590762fb3bdcf16b4383581397, UNASSIGN 2023-07-12 10:58:14,805 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=55, state=RUNNABLE; CloseRegionProcedure c4c45abdc78a3ec97cd80e52c0a7f6ec, server=jenkins-hbase9.apache.org,45597,1689159484713}] 2023-07-12 10:58:14,805 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=51, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=2b5f3d48acb383a31b5a40a5ac6a05da, UNASSIGN 2023-07-12 10:58:14,805 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=a3deacea066be39695a4220eb45806ba, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:14,805 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494805"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494805"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494805"}]},"ts":"1689159494805"} 2023-07-12 10:58:14,805 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=f33b5d590762fb3bdcf16b4383581397, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 
10:58:14,806 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689159493545.f33b5d590762fb3bdcf16b4383581397.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494805"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494805"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494805"}]},"ts":"1689159494805"} 2023-07-12 10:58:14,807 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=2b5f3d48acb383a31b5a40a5ac6a05da, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:14,807 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494807"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494807"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494807"}]},"ts":"1689159494807"} 2023-07-12 10:58:14,808 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=54, state=RUNNABLE; CloseRegionProcedure a3deacea066be39695a4220eb45806ba, server=jenkins-hbase9.apache.org,43635,1689159491271}] 2023-07-12 10:58:14,809 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=53, state=RUNNABLE; CloseRegionProcedure f33b5d590762fb3bdcf16b4383581397, server=jenkins-hbase9.apache.org,43635,1689159491271}] 2023-07-12 10:58:14,811 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=71, ppid=52, state=RUNNABLE; CloseRegionProcedure 2b5f3d48acb383a31b5a40a5ac6a05da, server=jenkins-hbase9.apache.org,43117,1689159488336}] 2023-07-12 10:58:14,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=51 2023-07-12 10:58:14,947 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 30a8043395e1b387772cd1ac0120a878 2023-07-12 10:58:14,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 30a8043395e1b387772cd1ac0120a878, disabling compactions & flushes 2023-07-12 10:58:14,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878. 2023-07-12 10:58:14,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878. 2023-07-12 10:58:14,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878. after waiting 0 ms 2023-07-12 10:58:14,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878. 
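(Illustrative sketch, not part of the log: the entries up to this point record the master finishing CreateTableProcedure pid=30 for the pre-split table Group_testCreateMultiRegion, the test waiting until every region is assigned, and DisableTableProcedure pid=51 starting to unassign regions. A minimal Java sketch of the client-side calls that drive such a create-then-wait phase is below; the class name and split keys are placeholders, not the values the real test uses.)

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    // Illustrative sketch only: create a pre-split table like Group_testCreateMultiRegion,
    // then block until the master has assigned every region. The split keys here are
    // hypothetical placeholders, not the boundaries computed by the actual test.
    public final class CreateMultiRegionSketch {
      private CreateMultiRegionSketch() {}

      public static void createAndWait(Admin admin, HBaseTestingUtility util) throws Exception {
        TableName tn = TableName.valueOf("Group_testCreateMultiRegion");
        byte[][] splitKeys = { Bytes.toBytes("b"), Bytes.toBytes("m"), Bytes.toBytes("t") };
        TableDescriptor desc = TableDescriptorBuilder.newBuilder(tn)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))  // single family 'f', as in the log
            .build();
        admin.createTable(desc, splitKeys);    // master runs a CreateTableProcedure and assigns one region per split range
        util.waitUntilAllRegionsAssigned(tn);  // same wait as "Waiting until all regions of table ... get assigned" above
      }
    }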
2023-07-12 10:58:14,950 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 2b5f3d48acb383a31b5a40a5ac6a05da 2023-07-12 10:58:14,951 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 2b5f3d48acb383a31b5a40a5ac6a05da, disabling compactions & flushes 2023-07-12 10:58:14,951 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da. 2023-07-12 10:58:14,951 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da. 2023-07-12 10:58:14,951 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da. after waiting 0 ms 2023-07-12 10:58:14,951 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da. 2023-07-12 10:58:14,951 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close f33b5d590762fb3bdcf16b4383581397 2023-07-12 10:58:14,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing f33b5d590762fb3bdcf16b4383581397, disabling compactions & flushes 2023-07-12 10:58:14,953 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397. 2023-07-12 10:58:14,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397. 2023-07-12 10:58:14,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397. after waiting 0 ms 2023-07-12 10:58:14,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397. 2023-07-12 10:58:14,953 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 40d01a652659184971d17fc8e26316ec 2023-07-12 10:58:14,954 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 40d01a652659184971d17fc8e26316ec, disabling compactions & flushes 2023-07-12 10:58:14,954 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec. 2023-07-12 10:58:14,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec. 2023-07-12 10:58:14,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec. 
after waiting 0 ms 2023-07-12 10:58:14,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec. 2023-07-12 10:58:14,958 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/30a8043395e1b387772cd1ac0120a878/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:14,959 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878. 2023-07-12 10:58:14,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 30a8043395e1b387772cd1ac0120a878: 2023-07-12 10:58:14,960 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/2b5f3d48acb383a31b5a40a5ac6a05da/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:14,961 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da. 2023-07-12 10:58:14,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 2b5f3d48acb383a31b5a40a5ac6a05da: 2023-07-12 10:58:14,962 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 30a8043395e1b387772cd1ac0120a878 2023-07-12 10:58:14,962 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 045026e1d038c146082663535dce70e5 2023-07-12 10:58:14,963 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=30a8043395e1b387772cd1ac0120a878, regionState=CLOSED 2023-07-12 10:58:14,963 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689159493545.30a8043395e1b387772cd1ac0120a878.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494963"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159494963"}]},"ts":"1689159494963"} 2023-07-12 10:58:14,966 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 2b5f3d48acb383a31b5a40a5ac6a05da 2023-07-12 10:58:14,966 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close aa4a97a87c6fc1d33907d2bfca429f6f 2023-07-12 10:58:14,968 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=2b5f3d48acb383a31b5a40a5ac6a05da, regionState=CLOSED 2023-07-12 10:58:14,968 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494967"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159494967"}]},"ts":"1689159494967"} 2023-07-12 10:58:14,971 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=56 2023-07-12 10:58:14,971 INFO 
[PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=56, state=SUCCESS; CloseRegionProcedure 30a8043395e1b387772cd1ac0120a878, server=jenkins-hbase9.apache.org,42501,1689159484335 in 164 msec 2023-07-12 10:58:14,972 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=52 2023-07-12 10:58:14,972 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=52, state=SUCCESS; CloseRegionProcedure 2b5f3d48acb383a31b5a40a5ac6a05da, server=jenkins-hbase9.apache.org,43117,1689159488336 in 158 msec 2023-07-12 10:58:14,974 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=51, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=30a8043395e1b387772cd1ac0120a878, UNASSIGN in 183 msec 2023-07-12 10:58:14,974 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=51, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=2b5f3d48acb383a31b5a40a5ac6a05da, UNASSIGN in 187 msec 2023-07-12 10:58:14,975 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/40d01a652659184971d17fc8e26316ec/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:14,975 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing aa4a97a87c6fc1d33907d2bfca429f6f, disabling compactions & flushes 2023-07-12 10:58:14,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f. 2023-07-12 10:58:14,975 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 045026e1d038c146082663535dce70e5, disabling compactions & flushes 2023-07-12 10:58:14,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f. 2023-07-12 10:58:14,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5. 2023-07-12 10:58:14,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f. after waiting 0 ms 2023-07-12 10:58:14,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5. 2023-07-12 10:58:14,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f. 
2023-07-12 10:58:14,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/f33b5d590762fb3bdcf16b4383581397/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:14,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5. after waiting 0 ms 2023-07-12 10:58:14,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec. 2023-07-12 10:58:14,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5. 2023-07-12 10:58:14,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 40d01a652659184971d17fc8e26316ec: 2023-07-12 10:58:14,977 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397. 2023-07-12 10:58:14,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for f33b5d590762fb3bdcf16b4383581397: 2023-07-12 10:58:14,979 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 40d01a652659184971d17fc8e26316ec 2023-07-12 10:58:14,979 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close c4c45abdc78a3ec97cd80e52c0a7f6ec 2023-07-12 10:58:14,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing c4c45abdc78a3ec97cd80e52c0a7f6ec, disabling compactions & flushes 2023-07-12 10:58:14,980 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec. 2023-07-12 10:58:14,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec. 2023-07-12 10:58:14,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec. after waiting 0 ms 2023-07-12 10:58:14,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec. 2023-07-12 10:58:14,983 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/045026e1d038c146082663535dce70e5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:14,984 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5. 
2023-07-12 10:58:14,984 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 045026e1d038c146082663535dce70e5: 2023-07-12 10:58:14,985 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/aa4a97a87c6fc1d33907d2bfca429f6f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:14,985 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f. 2023-07-12 10:58:14,985 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for aa4a97a87c6fc1d33907d2bfca429f6f: 2023-07-12 10:58:14,987 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=40d01a652659184971d17fc8e26316ec, regionState=CLOSED 2023-07-12 10:58:14,988 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689159493545.40d01a652659184971d17fc8e26316ec.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494987"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159494987"}]},"ts":"1689159494987"} 2023-07-12 10:58:14,988 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed f33b5d590762fb3bdcf16b4383581397 2023-07-12 10:58:14,988 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 34bfc030588548a8db0ed867f696ebe1 2023-07-12 10:58:14,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 34bfc030588548a8db0ed867f696ebe1, disabling compactions & flushes 2023-07-12 10:58:14,989 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1. 2023-07-12 10:58:14,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1. 2023-07-12 10:58:14,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1. after waiting 0 ms 2023-07-12 10:58:14,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1. 
2023-07-12 10:58:14,990 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=f33b5d590762fb3bdcf16b4383581397, regionState=CLOSED 2023-07-12 10:58:14,990 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689159493545.f33b5d590762fb3bdcf16b4383581397.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159494990"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159494990"}]},"ts":"1689159494990"} 2023-07-12 10:58:14,991 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 045026e1d038c146082663535dce70e5 2023-07-12 10:58:14,993 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=045026e1d038c146082663535dce70e5, regionState=CLOSED 2023-07-12 10:58:14,993 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689159493545.045026e1d038c146082663535dce70e5.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159494993"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159494993"}]},"ts":"1689159494993"} 2023-07-12 10:58:14,993 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed aa4a97a87c6fc1d33907d2bfca429f6f 2023-07-12 10:58:14,993 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close a9805313c53b56463f2953675bbf0488 2023-07-12 10:58:14,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing a9805313c53b56463f2953675bbf0488, disabling compactions & flushes 2023-07-12 10:58:14,997 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488. 2023-07-12 10:58:14,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488. 2023-07-12 10:58:14,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488. after waiting 0 ms 2023-07-12 10:58:14,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488. 
2023-07-12 10:58:14,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/34bfc030588548a8db0ed867f696ebe1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:14,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/c4c45abdc78a3ec97cd80e52c0a7f6ec/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:14,998 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=aa4a97a87c6fc1d33907d2bfca429f6f, regionState=CLOSED 2023-07-12 10:58:14,998 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159494998"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159494998"}]},"ts":"1689159494998"} 2023-07-12 10:58:14,999 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1. 2023-07-12 10:58:14,999 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 34bfc030588548a8db0ed867f696ebe1: 2023-07-12 10:58:14,999 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec. 2023-07-12 10:58:14,999 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for c4c45abdc78a3ec97cd80e52c0a7f6ec: 2023-07-12 10:58:15,002 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 34bfc030588548a8db0ed867f696ebe1 2023-07-12 10:58:15,002 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close a3deacea066be39695a4220eb45806ba 2023-07-12 10:58:15,003 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing a3deacea066be39695a4220eb45806ba, disabling compactions & flushes 2023-07-12 10:58:15,003 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba. 2023-07-12 10:58:15,003 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba. 2023-07-12 10:58:15,003 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba. after waiting 0 ms 2023-07-12 10:58:15,003 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba. 
2023-07-12 10:58:15,004 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=59 2023-07-12 10:58:15,004 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=59, state=SUCCESS; CloseRegionProcedure 40d01a652659184971d17fc8e26316ec, server=jenkins-hbase9.apache.org,45597,1689159484713 in 193 msec 2023-07-12 10:58:15,005 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/a9805313c53b56463f2953675bbf0488/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:15,005 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=60 2023-07-12 10:58:15,005 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=60, state=SUCCESS; CloseRegionProcedure 045026e1d038c146082663535dce70e5, server=jenkins-hbase9.apache.org,42501,1689159484335 in 201 msec 2023-07-12 10:58:15,005 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488. 2023-07-12 10:58:15,006 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for a9805313c53b56463f2953675bbf0488: 2023-07-12 10:58:15,006 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=53 2023-07-12 10:58:15,006 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=34bfc030588548a8db0ed867f696ebe1, regionState=CLOSED 2023-07-12 10:58:15,006 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=53, state=SUCCESS; CloseRegionProcedure f33b5d590762fb3bdcf16b4383581397, server=jenkins-hbase9.apache.org,43635,1689159491271 in 184 msec 2023-07-12 10:58:15,006 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159495006"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159495006"}]},"ts":"1689159495006"} 2023-07-12 10:58:15,008 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed c4c45abdc78a3ec97cd80e52c0a7f6ec 2023-07-12 10:58:15,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateMultiRegion/a3deacea066be39695a4220eb45806ba/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:15,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba. 
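(Illustrative sketch, not part of the log: the surrounding entries show DisableTableProcedure pid=51 closing each region via per-server CloseRegionProcedures, and the entries that follow show DeleteTableProcedure pid=72 archiving the region directories. On the client side this whole tail amounts to two Admin calls, sketched below under the same placeholder assumptions as the earlier sketch.)

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    // Illustrative sketch only: the client calls behind the disable/delete tail of this log.
    public final class DropMultiRegionSketch {
      private DropMultiRegionSketch() {}

      public static void disableAndDelete(Admin admin) throws IOException {
        TableName tn = TableName.valueOf("Group_testCreateMultiRegion");
        if (admin.isTableEnabled(tn)) {
          admin.disableTable(tn);  // DisableTableProcedure: one UNASSIGN / CloseRegionProcedure per region
        }
        admin.deleteTable(tn);     // DeleteTableProcedure: region dirs under .tmp are archived by HFileArchiver
      }
    }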
2023-07-12 10:58:15,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed a9805313c53b56463f2953675bbf0488 2023-07-12 10:58:15,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for a3deacea066be39695a4220eb45806ba: 2023-07-12 10:58:15,020 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=61 2023-07-12 10:58:15,020 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=61, state=SUCCESS; CloseRegionProcedure aa4a97a87c6fc1d33907d2bfca429f6f, server=jenkins-hbase9.apache.org,43117,1689159488336 in 203 msec 2023-07-12 10:58:15,020 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=51, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=40d01a652659184971d17fc8e26316ec, UNASSIGN in 217 msec 2023-07-12 10:58:15,021 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=51, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=045026e1d038c146082663535dce70e5, UNASSIGN in 217 msec 2023-07-12 10:58:15,021 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=51, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f33b5d590762fb3bdcf16b4383581397, UNASSIGN in 220 msec 2023-07-12 10:58:15,022 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=a9805313c53b56463f2953675bbf0488, regionState=CLOSED 2023-07-12 10:58:15,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed a3deacea066be39695a4220eb45806ba 2023-07-12 10:58:15,022 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=58 2023-07-12 10:58:15,022 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=58, state=SUCCESS; CloseRegionProcedure 34bfc030588548a8db0ed867f696ebe1, server=jenkins-hbase9.apache.org,43635,1689159491271 in 212 msec 2023-07-12 10:58:15,022 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689159493545.a9805313c53b56463f2953675bbf0488.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159495022"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159495022"}]},"ts":"1689159495022"} 2023-07-12 10:58:15,022 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=c4c45abdc78a3ec97cd80e52c0a7f6ec, regionState=CLOSED 2023-07-12 10:58:15,023 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159495022"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159495022"}]},"ts":"1689159495022"} 2023-07-12 10:58:15,024 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=51, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=aa4a97a87c6fc1d33907d2bfca429f6f, UNASSIGN in 233 msec 2023-07-12 10:58:15,024 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=a3deacea066be39695a4220eb45806ba, regionState=CLOSED 2023-07-12 10:58:15,024 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159495024"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159495024"}]},"ts":"1689159495024"} 2023-07-12 10:58:15,026 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=51, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=34bfc030588548a8db0ed867f696ebe1, UNASSIGN in 234 msec 2023-07-12 10:58:15,029 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=57 2023-07-12 10:58:15,029 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=57, state=SUCCESS; CloseRegionProcedure a9805313c53b56463f2953675bbf0488, server=jenkins-hbase9.apache.org,43117,1689159488336 in 228 msec 2023-07-12 10:58:15,031 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=55 2023-07-12 10:58:15,031 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=54 2023-07-12 10:58:15,031 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=51, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a9805313c53b56463f2953675bbf0488, UNASSIGN in 241 msec 2023-07-12 10:58:15,031 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=54, state=SUCCESS; CloseRegionProcedure a3deacea066be39695a4220eb45806ba, server=jenkins-hbase9.apache.org,43635,1689159491271 in 219 msec 2023-07-12 10:58:15,031 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=55, state=SUCCESS; CloseRegionProcedure c4c45abdc78a3ec97cd80e52c0a7f6ec, server=jenkins-hbase9.apache.org,45597,1689159484713 in 221 msec 2023-07-12 10:58:15,033 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=51, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a3deacea066be39695a4220eb45806ba, UNASSIGN in 243 msec 2023-07-12 10:58:15,034 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=51 2023-07-12 10:58:15,034 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=51, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=c4c45abdc78a3ec97cd80e52c0a7f6ec, UNASSIGN in 243 msec 2023-07-12 10:58:15,035 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159495035"}]},"ts":"1689159495035"} 2023-07-12 10:58:15,037 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=DISABLED in hbase:meta 2023-07-12 10:58:15,039 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testCreateMultiRegion to state=DISABLED 2023-07-12 10:58:15,042 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=51, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion in 264 msec 2023-07-12 10:58:15,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=51 2023-07-12 10:58:15,083 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: 
default:Group_testCreateMultiRegion, procId: 51 completed 2023-07-12 10:58:15,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete Group_testCreateMultiRegion 2023-07-12 10:58:15,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=72, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-12 10:58:15,088 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=72, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-12 10:58:15,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateMultiRegion' from rsgroup 'default' 2023-07-12 10:58:15,089 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=72, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-12 10:58:15,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:15,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=72 2023-07-12 10:58:15,107 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/2b5f3d48acb383a31b5a40a5ac6a05da 2023-07-12 10:58:15,107 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/f33b5d590762fb3bdcf16b4383581397 2023-07-12 10:58:15,108 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/a3deacea066be39695a4220eb45806ba 2023-07-12 10:58:15,108 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/c4c45abdc78a3ec97cd80e52c0a7f6ec 2023-07-12 10:58:15,108 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/a9805313c53b56463f2953675bbf0488 2023-07-12 10:58:15,108 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/40d01a652659184971d17fc8e26316ec 2023-07-12 10:58:15,108 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/34bfc030588548a8db0ed867f696ebe1 2023-07-12 10:58:15,108 DEBUG [HFileArchiver-8] 
backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/30a8043395e1b387772cd1ac0120a878 2023-07-12 10:58:15,112 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/34bfc030588548a8db0ed867f696ebe1/f, FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/34bfc030588548a8db0ed867f696ebe1/recovered.edits] 2023-07-12 10:58:15,113 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/a9805313c53b56463f2953675bbf0488/f, FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/a9805313c53b56463f2953675bbf0488/recovered.edits] 2023-07-12 10:58:15,113 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/c4c45abdc78a3ec97cd80e52c0a7f6ec/f, FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/c4c45abdc78a3ec97cd80e52c0a7f6ec/recovered.edits] 2023-07-12 10:58:15,115 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/a3deacea066be39695a4220eb45806ba/f, FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/a3deacea066be39695a4220eb45806ba/recovered.edits] 2023-07-12 10:58:15,115 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/2b5f3d48acb383a31b5a40a5ac6a05da/f, FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/2b5f3d48acb383a31b5a40a5ac6a05da/recovered.edits] 2023-07-12 10:58:15,115 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/30a8043395e1b387772cd1ac0120a878/f, FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/30a8043395e1b387772cd1ac0120a878/recovered.edits] 2023-07-12 10:58:15,116 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/f33b5d590762fb3bdcf16b4383581397/f, FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/f33b5d590762fb3bdcf16b4383581397/recovered.edits] 2023-07-12 10:58:15,116 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/40d01a652659184971d17fc8e26316ec/f, FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/40d01a652659184971d17fc8e26316ec/recovered.edits] 2023-07-12 10:58:15,141 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/a9805313c53b56463f2953675bbf0488/recovered.edits/4.seqid to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/default/Group_testCreateMultiRegion/a9805313c53b56463f2953675bbf0488/recovered.edits/4.seqid 2023-07-12 10:58:15,144 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/30a8043395e1b387772cd1ac0120a878/recovered.edits/4.seqid to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/default/Group_testCreateMultiRegion/30a8043395e1b387772cd1ac0120a878/recovered.edits/4.seqid 2023-07-12 10:58:15,144 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/c4c45abdc78a3ec97cd80e52c0a7f6ec/recovered.edits/4.seqid to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/default/Group_testCreateMultiRegion/c4c45abdc78a3ec97cd80e52c0a7f6ec/recovered.edits/4.seqid 2023-07-12 10:58:15,145 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/34bfc030588548a8db0ed867f696ebe1/recovered.edits/4.seqid to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/default/Group_testCreateMultiRegion/34bfc030588548a8db0ed867f696ebe1/recovered.edits/4.seqid 2023-07-12 10:58:15,146 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/a3deacea066be39695a4220eb45806ba/recovered.edits/4.seqid to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/default/Group_testCreateMultiRegion/a3deacea066be39695a4220eb45806ba/recovered.edits/4.seqid 2023-07-12 10:58:15,147 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/a9805313c53b56463f2953675bbf0488 2023-07-12 10:58:15,147 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/045026e1d038c146082663535dce70e5 2023-07-12 10:58:15,147 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/2b5f3d48acb383a31b5a40a5ac6a05da/recovered.edits/4.seqid to 
hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/default/Group_testCreateMultiRegion/2b5f3d48acb383a31b5a40a5ac6a05da/recovered.edits/4.seqid 2023-07-12 10:58:15,148 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/c4c45abdc78a3ec97cd80e52c0a7f6ec 2023-07-12 10:58:15,148 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/aa4a97a87c6fc1d33907d2bfca429f6f 2023-07-12 10:58:15,148 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/30a8043395e1b387772cd1ac0120a878 2023-07-12 10:58:15,149 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/34bfc030588548a8db0ed867f696ebe1 2023-07-12 10:58:15,150 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/40d01a652659184971d17fc8e26316ec/recovered.edits/4.seqid to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/default/Group_testCreateMultiRegion/40d01a652659184971d17fc8e26316ec/recovered.edits/4.seqid 2023-07-12 10:58:15,150 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/a3deacea066be39695a4220eb45806ba 2023-07-12 10:58:15,151 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/2b5f3d48acb383a31b5a40a5ac6a05da 2023-07-12 10:58:15,151 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/40d01a652659184971d17fc8e26316ec 2023-07-12 10:58:15,152 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/f33b5d590762fb3bdcf16b4383581397/recovered.edits/4.seqid to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/default/Group_testCreateMultiRegion/f33b5d590762fb3bdcf16b4383581397/recovered.edits/4.seqid 2023-07-12 10:58:15,153 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/045026e1d038c146082663535dce70e5/f, FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/045026e1d038c146082663535dce70e5/recovered.edits] 2023-07-12 10:58:15,154 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted 
hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/f33b5d590762fb3bdcf16b4383581397 2023-07-12 10:58:15,155 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/aa4a97a87c6fc1d33907d2bfca429f6f/f, FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/aa4a97a87c6fc1d33907d2bfca429f6f/recovered.edits] 2023-07-12 10:58:15,165 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/045026e1d038c146082663535dce70e5/recovered.edits/4.seqid to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/default/Group_testCreateMultiRegion/045026e1d038c146082663535dce70e5/recovered.edits/4.seqid 2023-07-12 10:58:15,166 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/aa4a97a87c6fc1d33907d2bfca429f6f/recovered.edits/4.seqid to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/default/Group_testCreateMultiRegion/aa4a97a87c6fc1d33907d2bfca429f6f/recovered.edits/4.seqid 2023-07-12 10:58:15,166 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/045026e1d038c146082663535dce70e5 2023-07-12 10:58:15,167 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateMultiRegion/aa4a97a87c6fc1d33907d2bfca429f6f 2023-07-12 10:58:15,167 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testCreateMultiRegion regions 2023-07-12 10:58:15,172 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=72, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-12 10:58:15,177 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 10 rows of Group_testCreateMultiRegion from hbase:meta 2023-07-12 10:58:15,181 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateMultiRegion' descriptor. 2023-07-12 10:58:15,183 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=72, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-12 10:58:15,183 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateMultiRegion' from region states. 
2023-07-12 10:58:15,183 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159495183"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:15,184 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689159493545.f33b5d590762fb3bdcf16b4383581397.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159495183"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:15,184 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159495183"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:15,184 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159495183"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:15,184 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689159493545.30a8043395e1b387772cd1ac0120a878.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159495183"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:15,184 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689159493545.a9805313c53b56463f2953675bbf0488.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159495183"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:15,184 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159495183"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:15,184 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689159493545.40d01a652659184971d17fc8e26316ec.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159495183"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:15,184 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689159493545.045026e1d038c146082663535dce70e5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159495183"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:15,184 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159495183"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:15,188 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 10 regions from META 2023-07-12 10:58:15,188 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 2b5f3d48acb383a31b5a40a5ac6a05da, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689159493545.2b5f3d48acb383a31b5a40a5ac6a05da.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => 
'\x00"$&('}, {ENCODED => f33b5d590762fb3bdcf16b4383581397, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1689159493545.f33b5d590762fb3bdcf16b4383581397.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'}, {ENCODED => a3deacea066be39695a4220eb45806ba, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1689159493545.a3deacea066be39695a4220eb45806ba.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'}, {ENCODED => c4c45abdc78a3ec97cd80e52c0a7f6ec, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1689159493545.c4c45abdc78a3ec97cd80e52c0a7f6ec.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'}, {ENCODED => 30a8043395e1b387772cd1ac0120a878, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689159493545.30a8043395e1b387772cd1ac0120a878.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'}, {ENCODED => a9805313c53b56463f2953675bbf0488, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689159493545.a9805313c53b56463f2953675bbf0488.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'}, {ENCODED => 34bfc030588548a8db0ed867f696ebe1, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689159493545.34bfc030588548a8db0ed867f696ebe1.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'}, {ENCODED => 40d01a652659184971d17fc8e26316ec, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689159493545.40d01a652659184971d17fc8e26316ec.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'}, {ENCODED => 045026e1d038c146082663535dce70e5, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689159493545.045026e1d038c146082663535dce70e5.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''}, {ENCODED => aa4a97a87c6fc1d33907d2bfca429f6f, NAME => 'Group_testCreateMultiRegion,,1689159493545.aa4a97a87c6fc1d33907d2bfca429f6f.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'}] 2023-07-12 10:58:15,188 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateMultiRegion' as deleted. 
2023-07-12 10:58:15,188 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689159495188"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:15,190 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateMultiRegion state from META 2023-07-12 10:58:15,193 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=72, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-12 10:58:15,195 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=72, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion in 109 msec 2023-07-12 10:58:15,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=72 2023-07-12 10:58:15,196 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateMultiRegion, procId: 72 completed 2023-07-12 10:58:15,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:15,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:15,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:15,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:15,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:15,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:15,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:15,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:15,222 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:15,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:15,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:15,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:15,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:41017] to rsgroup master 2023-07-12 10:58:15,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:15,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.CallRunner(144): callId: 247 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:45870 deadline: 1689160695240, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 2023-07-12 10:58:15,242 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:15,244 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:15,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,245 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:42501, jenkins-hbase9.apache.org:43117, jenkins-hbase9.apache.org:43635, jenkins-hbase9.apache.org:45597], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:15,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:15,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:15,267 INFO [Listener at localhost/44831] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateMultiRegion Thread=503 (was 500)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1281706449_17 at /127.0.0.1:33862 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312)
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1616880846_17 at /127.0.0.1:37478 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: hconnection-0xddfa172-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: hconnection-0x324c1766-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: hconnection-0x324c1766-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
- Thread LEAK? -, OpenFileDescriptor=803 (was 778) - OpenFileDescriptor LEAK?
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=322 (was 341), ProcessCount=172 (was 172), AvailableMemoryMB=6031 (was 6106) 2023-07-12 10:58:15,267 WARN [Listener at localhost/44831] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-12 10:58:15,288 INFO [Listener at localhost/44831] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testNamespaceCreateAndAssign Thread=503, OpenFileDescriptor=803, MaxFileDescriptor=60000, SystemLoadAverage=322, ProcessCount=172, AvailableMemoryMB=6039 2023-07-12 10:58:15,288 WARN [Listener at localhost/44831] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-12 10:58:15,288 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(132): testNamespaceCreateAndAssign 2023-07-12 10:58:15,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:15,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 10:58:15,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:15,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:15,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:15,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:15,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:15,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:15,312 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:15,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:15,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,316 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:15,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:15,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:41017] to rsgroup master 2023-07-12 10:58:15,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:15,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.CallRunner(144): callId: 275 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:45870 deadline: 1689160695334, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 2023-07-12 10:58:15,335 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ...
1 more 2023-07-12 10:58:15,337 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:15,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,338 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:42501, jenkins-hbase9.apache.org:43117, jenkins-hbase9.apache.org:43635, jenkins-hbase9.apache.org:45597], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:15,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:15,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:15,340 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBasics(118): testNamespaceCreateAndAssign 2023-07-12 10:58:15,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:15,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:15,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup appInfo 2023-07-12 10:58:15,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-12 10:58:15,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:15,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:15,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,365 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:42501] to rsgroup appInfo 2023-07-12 10:58:15,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-12 10:58:15,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:15,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(238): Moving server region 0832c48321f808d3b4d6fb68605b1448, which do not belong to RSGroup appInfo 2023-07-12 10:58:15,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:15,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:15,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:15,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:15,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:15,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=73, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, REOPEN/MOVE 2023-07-12 10:58:15,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(238): Moving server region e5addb24bba6e8be9d4cddc12a45ff25, which do not belong to RSGroup appInfo 2023-07-12 10:58:15,378 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=73, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, REOPEN/MOVE 2023-07-12 10:58:15,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:15,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:15,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:15,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:15,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:15,379 INFO [PEWorker-4] 
assignment.RegionStateStore(219): pid=73 updating hbase:meta row=0832c48321f808d3b4d6fb68605b1448, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:15,380 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159495379"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159495379"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159495379"}]},"ts":"1689159495379"} 2023-07-12 10:58:15,382 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=73, state=RUNNABLE; CloseRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,42501,1689159484335}] 2023-07-12 10:58:15,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, REOPEN/MOVE 2023-07-12 10:58:15,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-12 10:58:15,384 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=74, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, REOPEN/MOVE 2023-07-12 10:58:15,385 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=e5addb24bba6e8be9d4cddc12a45ff25, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:15,385 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159495385"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159495385"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159495385"}]},"ts":"1689159495385"} 2023-07-12 10:58:15,387 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=74, state=RUNNABLE; CloseRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,42501,1689159484335}] 2023-07-12 10:58:15,537 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:15,538 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 0832c48321f808d3b4d6fb68605b1448, disabling compactions & flushes 2023-07-12 10:58:15,539 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:15,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:15,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 
after waiting 0 ms 2023-07-12 10:58:15,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:15,539 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 0832c48321f808d3b4d6fb68605b1448 1/1 column families, dataSize=5.69 KB heapSize=9.36 KB 2023-07-12 10:58:15,560 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.69 KB at sequenceid=37 (bloomFilter=true), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/.tmp/m/34bd11963bd04d34b7e2994e45ec4653 2023-07-12 10:58:15,566 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 34bd11963bd04d34b7e2994e45ec4653 2023-07-12 10:58:15,567 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/.tmp/m/34bd11963bd04d34b7e2994e45ec4653 as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/34bd11963bd04d34b7e2994e45ec4653 2023-07-12 10:58:15,574 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 34bd11963bd04d34b7e2994e45ec4653 2023-07-12 10:58:15,574 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/34bd11963bd04d34b7e2994e45ec4653, entries=10, sequenceid=37, filesize=5.4 K 2023-07-12 10:58:15,576 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.69 KB/5822, heapSize ~9.34 KB/9568, currentSize=0 B/0 for 0832c48321f808d3b4d6fb68605b1448 in 37ms, sequenceid=37, compaction requested=false 2023-07-12 10:58:15,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/recovered.edits/40.seqid, newMaxSeqId=40, maxSeqId=12 2023-07-12 10:58:15,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:15,589 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 
2023-07-12 10:58:15,589 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 0832c48321f808d3b4d6fb68605b1448: 2023-07-12 10:58:15,589 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 0832c48321f808d3b4d6fb68605b1448 move to jenkins-hbase9.apache.org,45597,1689159484713 record at close sequenceid=37 2023-07-12 10:58:15,591 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:15,592 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:15,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing e5addb24bba6e8be9d4cddc12a45ff25, disabling compactions & flushes 2023-07-12 10:58:15,592 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:15,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:15,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. after waiting 0 ms 2023-07-12 10:58:15,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:15,592 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing e5addb24bba6e8be9d4cddc12a45ff25 1/1 column families, dataSize=72 B heapSize=400 B 2023-07-12 10:58:15,593 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=0832c48321f808d3b4d6fb68605b1448, regionState=CLOSED 2023-07-12 10:58:15,593 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159495593"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159495593"}]},"ts":"1689159495593"} 2023-07-12 10:58:15,597 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=73 2023-07-12 10:58:15,597 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=73, state=SUCCESS; CloseRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,42501,1689159484335 in 212 msec 2023-07-12 10:58:15,598 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=73, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,45597,1689159484713; forceNewPlan=false, retain=false 2023-07-12 10:58:15,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=72 B at sequenceid=13 (bloomFilter=true), 
to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/.tmp/info/60a7cefcb3894d8ba483b968c9da2362 2023-07-12 10:58:15,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/.tmp/info/60a7cefcb3894d8ba483b968c9da2362 as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/60a7cefcb3894d8ba483b968c9da2362 2023-07-12 10:58:15,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/60a7cefcb3894d8ba483b968c9da2362, entries=1, sequenceid=13, filesize=4.8 K 2023-07-12 10:58:15,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~72 B/72, heapSize ~384 B/384, currentSize=0 B/0 for e5addb24bba6e8be9d4cddc12a45ff25 in 30ms, sequenceid=13, compaction requested=false 2023-07-12 10:58:15,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/recovered.edits/16.seqid, newMaxSeqId=16, maxSeqId=9 2023-07-12 10:58:15,629 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:15,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for e5addb24bba6e8be9d4cddc12a45ff25: 2023-07-12 10:58:15,629 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding e5addb24bba6e8be9d4cddc12a45ff25 move to jenkins-hbase9.apache.org,43635,1689159491271 record at close sequenceid=13 2023-07-12 10:58:15,631 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:15,632 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=e5addb24bba6e8be9d4cddc12a45ff25, regionState=CLOSED 2023-07-12 10:58:15,632 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159495632"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159495632"}]},"ts":"1689159495632"} 2023-07-12 10:58:15,635 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=74 2023-07-12 10:58:15,635 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=74, state=SUCCESS; CloseRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,42501,1689159484335 in 246 msec 2023-07-12 10:58:15,636 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=74, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,43635,1689159491271; 
forceNewPlan=false, retain=false 2023-07-12 10:58:15,636 INFO [jenkins-hbase9:41017] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-12 10:58:15,636 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=0832c48321f808d3b4d6fb68605b1448, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:15,637 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159495636"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159495636"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159495636"}]},"ts":"1689159495636"} 2023-07-12 10:58:15,637 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=e5addb24bba6e8be9d4cddc12a45ff25, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:15,637 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159495637"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159495637"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159495637"}]},"ts":"1689159495637"} 2023-07-12 10:58:15,638 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=73, state=RUNNABLE; OpenRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,45597,1689159484713}] 2023-07-12 10:58:15,639 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=74, state=RUNNABLE; OpenRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,43635,1689159491271}] 2023-07-12 10:58:15,796 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:15,796 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 
2023-07-12 10:58:15,796 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e5addb24bba6e8be9d4cddc12a45ff25, NAME => 'hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:15,796 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0832c48321f808d3b4d6fb68605b1448, NAME => 'hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:15,796 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:58:15,796 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:15,796 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. service=MultiRowMutationService 2023-07-12 10:58:15,796 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:15,796 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-12 10:58:15,796 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:15,796 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:15,796 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:15,797 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:15,797 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:15,797 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:15,798 INFO [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:15,798 INFO [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:15,799 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info 2023-07-12 10:58:15,799 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m 2023-07-12 10:58:15,799 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info 2023-07-12 10:58:15,799 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m 2023-07-12 10:58:15,800 INFO [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for 
tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0832c48321f808d3b4d6fb68605b1448 columnFamilyName m 2023-07-12 10:58:15,800 INFO [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e5addb24bba6e8be9d4cddc12a45ff25 columnFamilyName info 2023-07-12 10:58:15,813 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 34bd11963bd04d34b7e2994e45ec4653 2023-07-12 10:58:15,814 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/34bd11963bd04d34b7e2994e45ec4653 2023-07-12 10:58:15,814 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/60a7cefcb3894d8ba483b968c9da2362 2023-07-12 10:58:15,820 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/7c61e3ca2f7f49229ba8ba16c44c26fc 2023-07-12 10:58:15,821 INFO [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(310): Store=e5addb24bba6e8be9d4cddc12a45ff25/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:15,821 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/8fa7cb9488f24a899b6cdde7163b9c4c 2023-07-12 10:58:15,821 INFO [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(310): Store=0832c48321f808d3b4d6fb68605b1448/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:15,822 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:15,822 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448 
2023-07-12 10:58:15,823 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:15,824 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:15,827 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:15,827 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:15,828 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened e5addb24bba6e8be9d4cddc12a45ff25; next sequenceid=17; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9803544160, jitterRate=-0.08697380125522614}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:15,828 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for e5addb24bba6e8be9d4cddc12a45ff25: 2023-07-12 10:58:15,828 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 0832c48321f808d3b4d6fb68605b1448; next sequenceid=41; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@3f2835b0, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:15,828 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 0832c48321f808d3b4d6fb68605b1448: 2023-07-12 10:58:15,829 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25., pid=78, masterSystemTime=1689159495791 2023-07-12 10:58:15,829 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448., pid=77, masterSystemTime=1689159495791 2023-07-12 10:58:15,831 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:15,831 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 
2023-07-12 10:58:15,832 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=0832c48321f808d3b4d6fb68605b1448, regionState=OPEN, openSeqNum=41, regionLocation=jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:15,832 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159495832"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159495832"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159495832"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159495832"}]},"ts":"1689159495832"} 2023-07-12 10:58:15,833 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:15,833 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:15,834 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=e5addb24bba6e8be9d4cddc12a45ff25, regionState=OPEN, openSeqNum=17, regionLocation=jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:15,834 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159495833"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159495833"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159495833"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159495833"}]},"ts":"1689159495833"} 2023-07-12 10:58:15,839 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=73 2023-07-12 10:58:15,840 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=73, state=SUCCESS; OpenRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,45597,1689159484713 in 198 msec 2023-07-12 10:58:15,841 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=74 2023-07-12 10:58:15,841 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=73, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, REOPEN/MOVE in 465 msec 2023-07-12 10:58:15,841 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=74, state=SUCCESS; OpenRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,43635,1689159491271 in 198 msec 2023-07-12 10:58:15,843 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, REOPEN/MOVE in 462 msec 2023-07-12 10:58:16,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure.ProcedureSyncWait(216): waitFor pid=73 2023-07-12 10:58:16,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,42501,1689159484335] are moved back to default 2023-07-12 10:58:16,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] 
rsgroup.RSGroupAdminServer(438): Move servers done: default => appInfo 2023-07-12 10:58:16,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:16,388 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42501] ipc.CallRunner(144): callId: 14 service: ClientService methodName: Scan size: 136 connection: 172.31.2.10:58776 deadline: 1689159556387, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=45597 startCode=1689159484713. As of locationSeqNum=37. 2023-07-12 10:58:16,490 DEBUG [hconnection-0x324c1766-shared-pool-10] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:16,492 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:40136, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:16,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:16,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:16,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=appInfo 2023-07-12 10:58:16,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:16,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$15(3014): Client=jenkins//172.31.2.10 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'appInfo'} 2023-07-12 10:58:16,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=79, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-12 10:58:16,514 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42501] ipc.CallRunner(144): callId: 197 service: ClientService methodName: Get size: 120 connection: 172.31.2.10:58792 deadline: 1689159556513, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=43635 startCode=1689159491271. As of locationSeqNum=13. 
2023-07-12 10:58:16,616 DEBUG [PEWorker-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:16,618 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:36438, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:16,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=79 2023-07-12 10:58:16,626 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:16,630 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=79, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 118 msec 2023-07-12 10:58:16,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=79 2023-07-12 10:58:16,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'Group_foo:Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:16,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=80, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-12 10:58:16,732 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=80, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:16,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "Group_foo" qualifier: "Group_testCreateAndAssign" procId is: 80 2023-07-12 10:58:16,733 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42501] ipc.CallRunner(144): callId: 201 service: ClientService methodName: ExecService size: 537 connection: 172.31.2.10:58792 deadline: 1689159556733, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=45597 startCode=1689159484713. As of locationSeqNum=37. 
2023-07-12 10:58:16,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-12 10:58:16,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-12 10:58:17,037 DEBUG [PEWorker-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:17,039 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:40146, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:17,042 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:17,042 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:17,043 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-12 10:58:17,043 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:17,047 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=80, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:17,050 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/Group_foo/Group_testCreateAndAssign/c8310935a544427176a391a4d4294db0 2023-07-12 10:58:17,050 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/Group_foo/Group_testCreateAndAssign/c8310935a544427176a391a4d4294db0 empty. 
2023-07-12 10:58:17,051 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/Group_foo/Group_testCreateAndAssign/c8310935a544427176a391a4d4294db0 2023-07-12 10:58:17,051 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_foo:Group_testCreateAndAssign regions 2023-07-12 10:58:17,070 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/Group_foo/Group_testCreateAndAssign/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:17,072 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(7675): creating {ENCODED => c8310935a544427176a391a4d4294db0, NAME => 'Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_foo:Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:17,084 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(866): Instantiated Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:17,084 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1604): Closing c8310935a544427176a391a4d4294db0, disabling compactions & flushes 2023-07-12 10:58:17,085 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1626): Closing region Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0. 2023-07-12 10:58:17,085 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0. 2023-07-12 10:58:17,085 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0. after waiting 0 ms 2023-07-12 10:58:17,085 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0. 2023-07-12 10:58:17,085 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1838): Closed Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0. 
2023-07-12 10:58:17,085 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1558): Region close journal for c8310935a544427176a391a4d4294db0: 2023-07-12 10:58:17,087 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=80, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:17,088 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689159497088"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159497088"}]},"ts":"1689159497088"} 2023-07-12 10:58:17,089 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 10:58:17,090 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=80, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:17,090 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159497090"}]},"ts":"1689159497090"} 2023-07-12 10:58:17,092 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=ENABLING in hbase:meta 2023-07-12 10:58:17,095 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=81, ppid=80, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=c8310935a544427176a391a4d4294db0, ASSIGN}] 2023-07-12 10:58:17,096 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, ppid=80, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=c8310935a544427176a391a4d4294db0, ASSIGN 2023-07-12 10:58:17,097 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=81, ppid=80, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=c8310935a544427176a391a4d4294db0, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,42501,1689159484335; forceNewPlan=false, retain=false 2023-07-12 10:58:17,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-12 10:58:17,249 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=c8310935a544427176a391a4d4294db0, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:17,249 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689159497249"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159497249"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159497249"}]},"ts":"1689159497249"} 2023-07-12 10:58:17,253 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; OpenRegionProcedure 
c8310935a544427176a391a4d4294db0, server=jenkins-hbase9.apache.org,42501,1689159484335}] 2023-07-12 10:58:17,409 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0. 2023-07-12 10:58:17,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c8310935a544427176a391a4d4294db0, NAME => 'Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:17,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndAssign c8310935a544427176a391a4d4294db0 2023-07-12 10:58:17,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:17,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for c8310935a544427176a391a4d4294db0 2023-07-12 10:58:17,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for c8310935a544427176a391a4d4294db0 2023-07-12 10:58:17,411 INFO [StoreOpener-c8310935a544427176a391a4d4294db0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c8310935a544427176a391a4d4294db0 2023-07-12 10:58:17,412 DEBUG [StoreOpener-c8310935a544427176a391a4d4294db0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/Group_foo/Group_testCreateAndAssign/c8310935a544427176a391a4d4294db0/f 2023-07-12 10:58:17,412 DEBUG [StoreOpener-c8310935a544427176a391a4d4294db0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/Group_foo/Group_testCreateAndAssign/c8310935a544427176a391a4d4294db0/f 2023-07-12 10:58:17,413 INFO [StoreOpener-c8310935a544427176a391a4d4294db0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c8310935a544427176a391a4d4294db0 columnFamilyName f 2023-07-12 10:58:17,413 INFO [StoreOpener-c8310935a544427176a391a4d4294db0-1] regionserver.HStore(310): Store=c8310935a544427176a391a4d4294db0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:17,414 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/Group_foo/Group_testCreateAndAssign/c8310935a544427176a391a4d4294db0 2023-07-12 10:58:17,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/Group_foo/Group_testCreateAndAssign/c8310935a544427176a391a4d4294db0 2023-07-12 10:58:17,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for c8310935a544427176a391a4d4294db0 2023-07-12 10:58:17,419 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/Group_foo/Group_testCreateAndAssign/c8310935a544427176a391a4d4294db0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:17,420 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened c8310935a544427176a391a4d4294db0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10877752640, jitterRate=0.01306965947151184}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:17,420 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for c8310935a544427176a391a4d4294db0: 2023-07-12 10:58:17,420 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0., pid=82, masterSystemTime=1689159497405 2023-07-12 10:58:17,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0. 2023-07-12 10:58:17,422 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0. 
2023-07-12 10:58:17,422 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=c8310935a544427176a391a4d4294db0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:17,423 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689159497422"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159497422"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159497422"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159497422"}]},"ts":"1689159497422"} 2023-07-12 10:58:17,427 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-12 10:58:17,427 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; OpenRegionProcedure c8310935a544427176a391a4d4294db0, server=jenkins-hbase9.apache.org,42501,1689159484335 in 173 msec 2023-07-12 10:58:17,428 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=81, resume processing ppid=80 2023-07-12 10:58:17,428 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=81, ppid=80, state=SUCCESS; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=c8310935a544427176a391a4d4294db0, ASSIGN in 332 msec 2023-07-12 10:58:17,429 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=80, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:17,429 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159497429"}]},"ts":"1689159497429"} 2023-07-12 10:58:17,430 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=ENABLED in hbase:meta 2023-07-12 10:58:17,433 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=80, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:17,434 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=80, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign in 705 msec 2023-07-12 10:58:17,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-12 10:58:17,538 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 80 completed 2023-07-12 10:58:17,538 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:17,544 INFO [Listener at localhost/44831] client.HBaseAdmin$15(890): Started disable of Group_foo:Group_testCreateAndAssign 2023-07-12 10:58:17,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable Group_foo:Group_testCreateAndAssign 2023-07-12 10:58:17,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] 
procedure2.ProcedureExecutor(1029): Stored pid=83, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-12 10:58:17,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=83 2023-07-12 10:58:17,549 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159497548"}]},"ts":"1689159497548"} 2023-07-12 10:58:17,550 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=DISABLING in hbase:meta 2023-07-12 10:58:17,553 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_foo:Group_testCreateAndAssign to state=DISABLING 2023-07-12 10:58:17,554 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=84, ppid=83, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=c8310935a544427176a391a4d4294db0, UNASSIGN}] 2023-07-12 10:58:17,557 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, ppid=83, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=c8310935a544427176a391a4d4294db0, UNASSIGN 2023-07-12 10:58:17,558 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=c8310935a544427176a391a4d4294db0, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:17,559 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689159497558"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159497558"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159497558"}]},"ts":"1689159497558"} 2023-07-12 10:58:17,562 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure c8310935a544427176a391a4d4294db0, server=jenkins-hbase9.apache.org,42501,1689159484335}] 2023-07-12 10:58:17,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=83 2023-07-12 10:58:17,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close c8310935a544427176a391a4d4294db0 2023-07-12 10:58:17,715 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing c8310935a544427176a391a4d4294db0, disabling compactions & flushes 2023-07-12 10:58:17,715 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0. 2023-07-12 10:58:17,715 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0. 2023-07-12 10:58:17,715 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0. 
after waiting 0 ms 2023-07-12 10:58:17,715 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0. 2023-07-12 10:58:17,719 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/Group_foo/Group_testCreateAndAssign/c8310935a544427176a391a4d4294db0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:17,720 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0. 2023-07-12 10:58:17,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for c8310935a544427176a391a4d4294db0: 2023-07-12 10:58:17,722 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed c8310935a544427176a391a4d4294db0 2023-07-12 10:58:17,722 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=c8310935a544427176a391a4d4294db0, regionState=CLOSED 2023-07-12 10:58:17,722 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689159497722"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159497722"}]},"ts":"1689159497722"} 2023-07-12 10:58:17,726 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-12 10:58:17,726 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure c8310935a544427176a391a4d4294db0, server=jenkins-hbase9.apache.org,42501,1689159484335 in 162 msec 2023-07-12 10:58:17,727 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=84, resume processing ppid=83 2023-07-12 10:58:17,728 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=84, ppid=83, state=SUCCESS; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=c8310935a544427176a391a4d4294db0, UNASSIGN in 172 msec 2023-07-12 10:58:17,728 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159497728"}]},"ts":"1689159497728"} 2023-07-12 10:58:17,730 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=DISABLED in hbase:meta 2023-07-12 10:58:17,731 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_foo:Group_testCreateAndAssign to state=DISABLED 2023-07-12 10:58:17,733 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=83, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign in 188 msec 2023-07-12 10:58:17,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=83 2023-07-12 10:58:17,851 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 83 completed 2023-07-12 10:58:17,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] 
master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete Group_foo:Group_testCreateAndAssign 2023-07-12 10:58:17,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=86, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-12 10:58:17,854 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=86, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-12 10:58:17,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_foo:Group_testCreateAndAssign' from rsgroup 'appInfo' 2023-07-12 10:58:17,855 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=86, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-12 10:58:17,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:17,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:17,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-12 10:58:17,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:17,860 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/Group_foo/Group_testCreateAndAssign/c8310935a544427176a391a4d4294db0 2023-07-12 10:58:17,862 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/Group_foo/Group_testCreateAndAssign/c8310935a544427176a391a4d4294db0/f, FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/Group_foo/Group_testCreateAndAssign/c8310935a544427176a391a4d4294db0/recovered.edits] 2023-07-12 10:58:17,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=86 2023-07-12 10:58:17,868 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/Group_foo/Group_testCreateAndAssign/c8310935a544427176a391a4d4294db0/recovered.edits/4.seqid to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/Group_foo/Group_testCreateAndAssign/c8310935a544427176a391a4d4294db0/recovered.edits/4.seqid 2023-07-12 10:58:17,869 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/Group_foo/Group_testCreateAndAssign/c8310935a544427176a391a4d4294db0 2023-07-12 10:58:17,869 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_foo:Group_testCreateAndAssign regions 2023-07-12 10:58:17,874 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=86, 
state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-12 10:58:17,876 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_foo:Group_testCreateAndAssign from hbase:meta 2023-07-12 10:58:17,879 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_foo:Group_testCreateAndAssign' descriptor. 2023-07-12 10:58:17,880 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=86, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-12 10:58:17,880 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_foo:Group_testCreateAndAssign' from region states. 2023-07-12 10:58:17,880 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159497880"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:17,882 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 10:58:17,882 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => c8310935a544427176a391a4d4294db0, NAME => 'Group_foo:Group_testCreateAndAssign,,1689159496728.c8310935a544427176a391a4d4294db0.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 10:58:17,882 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_foo:Group_testCreateAndAssign' as deleted. 2023-07-12 10:58:17,882 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689159497882"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:17,884 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_foo:Group_testCreateAndAssign state from META 2023-07-12 10:58:17,886 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=86, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-12 10:58:17,887 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=86, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign in 34 msec 2023-07-12 10:58:17,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=86 2023-07-12 10:58:17,964 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 86 completed 2023-07-12 10:58:17,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$17(3086): Client=jenkins//172.31.2.10 delete Group_foo 2023-07-12 10:58:17,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 10:58:17,992 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=87, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 10:58:17,996 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=87, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 
2023-07-12 10:58:17,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-12 10:58:17,999 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=87, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 10:58:18,001 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-12 10:58:18,001 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:18,002 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=87, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 10:58:18,003 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=87, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 10:58:18,005 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 23 msec 2023-07-12 10:58:18,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-12 10:58:18,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:18,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:18,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:18,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:18,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:18,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:18,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:18,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:18,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:18,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-12 10:58:18,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 10:58:18,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:18,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:18,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:18,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:18,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:42501] to rsgroup default 2023-07-12 10:58:18,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:18,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-12 10:58:18,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:18,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group appInfo, current retry=0 2023-07-12 10:58:18,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,42501,1689159484335] are moved back to appInfo 2023-07-12 10:58:18,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(438): Move servers done: appInfo => default 2023-07-12 10:58:18,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:18,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup appInfo 2023-07-12 10:58:18,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:18,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:18,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:18,128 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:18,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:18,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:18,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:18,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:18,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:18,138 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:18,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:18,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:41017] to rsgroup master 2023-07-12 10:58:18,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:18,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.CallRunner(144): callId: 364 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:45870 deadline: 1689160698139, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 2023-07-12 10:58:18,140 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 10:58:18,142 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:18,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:18,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:18,143 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:42501, jenkins-hbase9.apache.org:43117, jenkins-hbase9.apache.org:43635, jenkins-hbase9.apache.org:45597], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:18,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:18,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:18,151 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 10:58:18,167 INFO [Listener at localhost/44831] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testNamespaceCreateAndAssign Thread=513 (was 503) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1182424193_17 at /127.0.0.1:33862 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x324c1766-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1281706449_17 at /127.0.0.1:37478 [Waiting for operation #15] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/cluster_88e0f84f-bfb4-0918-fd25-f5762e628808/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1182424193_17 at /127.0.0.1:39686 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x324c1766-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1420092029_17 at /127.0.0.1:39690 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x324c1766-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x324c1766-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1420092029_17 at /127.0.0.1:39672 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1182424193_17 at /127.0.0.1:39702 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/cluster_88e0f84f-bfb4-0918-fd25-f5762e628808/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1182424193_17 at /127.0.0.1:33906 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=817 (was 803) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=322 (was 322), ProcessCount=170 (was 172), AvailableMemoryMB=8047 (was 6039) - AvailableMemoryMB LEAK? - 2023-07-12 10:58:18,168 WARN [Listener at localhost/44831] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-12 10:58:18,189 INFO [Listener at localhost/44831] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateAndDrop Thread=514, OpenFileDescriptor=817, MaxFileDescriptor=60000, SystemLoadAverage=322, ProcessCount=170, AvailableMemoryMB=8046 2023-07-12 10:58:18,189 WARN [Listener at localhost/44831] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-12 10:58:18,189 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(132): testCreateAndDrop 2023-07-12 10:58:18,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:18,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:18,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:18,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:18,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:18,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:18,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:18,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:18,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:18,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:18,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:18,222 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:18,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:18,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:18,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:18,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:18,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:18,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:18,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:18,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:41017] to rsgroup master 2023-07-12 10:58:18,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:18,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.CallRunner(144): callId: 392 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:45870 deadline: 1689160698237, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 2023-07-12 10:58:18,237 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:18,239 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:18,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:18,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:18,240 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:42501, jenkins-hbase9.apache.org:43117, jenkins-hbase9.apache.org:43635, jenkins-hbase9.apache.org:45597], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:18,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:18,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:18,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'Group_testCreateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:18,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=88, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-12 10:58:18,246 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:18,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(700): 
Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "Group_testCreateAndDrop" procId is: 88 2023-07-12 10:58:18,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-12 10:58:18,248 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:18,249 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:18,249 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:18,252 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:18,253 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateAndDrop/722a1a3493c823ec99137c57c5895795 2023-07-12 10:58:18,254 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateAndDrop/722a1a3493c823ec99137c57c5895795 empty. 2023-07-12 10:58:18,254 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateAndDrop/722a1a3493c823ec99137c57c5895795 2023-07-12 10:58:18,254 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndDrop regions 2023-07-12 10:58:18,278 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:18,279 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 722a1a3493c823ec99137c57c5895795, NAME => 'Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCreateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:18,299 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:18,299 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1604): Closing 722a1a3493c823ec99137c57c5895795, disabling compactions & flushes 2023-07-12 10:58:18,299 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795. 
2023-07-12 10:58:18,299 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795. 2023-07-12 10:58:18,299 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795. after waiting 0 ms 2023-07-12 10:58:18,299 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795. 2023-07-12 10:58:18,299 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795. 2023-07-12 10:58:18,300 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 722a1a3493c823ec99137c57c5895795: 2023-07-12 10:58:18,303 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:18,304 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689159498304"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159498304"}]},"ts":"1689159498304"} 2023-07-12 10:58:18,305 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 10:58:18,306 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:18,306 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159498306"}]},"ts":"1689159498306"} 2023-07-12 10:58:18,308 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=ENABLING in hbase:meta 2023-07-12 10:58:18,311 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:18,311 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:18,311 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:18,311 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:18,311 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 10:58:18,311 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:18,311 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=722a1a3493c823ec99137c57c5895795, ASSIGN}] 2023-07-12 10:58:18,313 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=Group_testCreateAndDrop, region=722a1a3493c823ec99137c57c5895795, ASSIGN 2023-07-12 10:58:18,314 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=722a1a3493c823ec99137c57c5895795, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,45597,1689159484713; forceNewPlan=false, retain=false 2023-07-12 10:58:18,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-12 10:58:18,464 INFO [jenkins-hbase9:41017] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 10:58:18,466 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=722a1a3493c823ec99137c57c5895795, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:18,466 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689159498466"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159498466"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159498466"}]},"ts":"1689159498466"} 2023-07-12 10:58:18,468 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=90, ppid=89, state=RUNNABLE; OpenRegionProcedure 722a1a3493c823ec99137c57c5895795, server=jenkins-hbase9.apache.org,45597,1689159484713}] 2023-07-12 10:58:18,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-12 10:58:18,623 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795. 
2023-07-12 10:58:18,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 722a1a3493c823ec99137c57c5895795, NAME => 'Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:18,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndDrop 722a1a3493c823ec99137c57c5895795 2023-07-12 10:58:18,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:18,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 722a1a3493c823ec99137c57c5895795 2023-07-12 10:58:18,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 722a1a3493c823ec99137c57c5895795 2023-07-12 10:58:18,625 INFO [StoreOpener-722a1a3493c823ec99137c57c5895795-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf of region 722a1a3493c823ec99137c57c5895795 2023-07-12 10:58:18,627 DEBUG [StoreOpener-722a1a3493c823ec99137c57c5895795-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateAndDrop/722a1a3493c823ec99137c57c5895795/cf 2023-07-12 10:58:18,627 DEBUG [StoreOpener-722a1a3493c823ec99137c57c5895795-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateAndDrop/722a1a3493c823ec99137c57c5895795/cf 2023-07-12 10:58:18,628 INFO [StoreOpener-722a1a3493c823ec99137c57c5895795-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 722a1a3493c823ec99137c57c5895795 columnFamilyName cf 2023-07-12 10:58:18,628 INFO [StoreOpener-722a1a3493c823ec99137c57c5895795-1] regionserver.HStore(310): Store=722a1a3493c823ec99137c57c5895795/cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:18,629 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateAndDrop/722a1a3493c823ec99137c57c5895795 2023-07-12 10:58:18,630 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateAndDrop/722a1a3493c823ec99137c57c5895795 2023-07-12 10:58:18,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 722a1a3493c823ec99137c57c5895795 2023-07-12 10:58:18,635 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateAndDrop/722a1a3493c823ec99137c57c5895795/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:18,636 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 722a1a3493c823ec99137c57c5895795; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9656020320, jitterRate=-0.10071302950382233}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:18,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 722a1a3493c823ec99137c57c5895795: 2023-07-12 10:58:18,637 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795., pid=90, masterSystemTime=1689159498619 2023-07-12 10:58:18,638 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795. 2023-07-12 10:58:18,638 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795. 
2023-07-12 10:58:18,639 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=722a1a3493c823ec99137c57c5895795, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:18,639 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689159498639"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159498639"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159498639"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159498639"}]},"ts":"1689159498639"} 2023-07-12 10:58:18,642 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=90, resume processing ppid=89 2023-07-12 10:58:18,642 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=90, ppid=89, state=SUCCESS; OpenRegionProcedure 722a1a3493c823ec99137c57c5895795, server=jenkins-hbase9.apache.org,45597,1689159484713 in 172 msec 2023-07-12 10:58:18,644 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-12 10:58:18,644 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=722a1a3493c823ec99137c57c5895795, ASSIGN in 331 msec 2023-07-12 10:58:18,645 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:18,645 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159498645"}]},"ts":"1689159498645"} 2023-07-12 10:58:18,646 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=ENABLED in hbase:meta 2023-07-12 10:58:18,648 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:18,650 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=88, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop in 406 msec 2023-07-12 10:58:18,849 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 10:58:18,849 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testCreateAndDrop' 2023-07-12 10:58:18,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-12 10:58:18,851 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateAndDrop, procId: 88 completed 2023-07-12 10:58:18,851 DEBUG [Listener at localhost/44831] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCreateAndDrop get assigned. 
Timeout = 60000ms 2023-07-12 10:58:18,851 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:18,856 INFO [Listener at localhost/44831] hbase.HBaseTestingUtility(3484): All regions for table Group_testCreateAndDrop assigned to meta. Checking AM states. 2023-07-12 10:58:18,856 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:18,857 INFO [Listener at localhost/44831] hbase.HBaseTestingUtility(3504): All regions for table Group_testCreateAndDrop assigned. 2023-07-12 10:58:18,857 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:18,861 INFO [Listener at localhost/44831] client.HBaseAdmin$15(890): Started disable of Group_testCreateAndDrop 2023-07-12 10:58:18,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable Group_testCreateAndDrop 2023-07-12 10:58:18,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-12 10:58:18,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-12 10:58:18,865 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159498865"}]},"ts":"1689159498865"} 2023-07-12 10:58:18,866 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=DISABLING in hbase:meta 2023-07-12 10:58:18,868 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testCreateAndDrop to state=DISABLING 2023-07-12 10:58:18,869 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=722a1a3493c823ec99137c57c5895795, UNASSIGN}] 2023-07-12 10:58:18,870 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=722a1a3493c823ec99137c57c5895795, UNASSIGN 2023-07-12 10:58:18,871 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=722a1a3493c823ec99137c57c5895795, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:18,871 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689159498871"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159498871"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159498871"}]},"ts":"1689159498871"} 2023-07-12 10:58:18,872 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=92, state=RUNNABLE; CloseRegionProcedure 722a1a3493c823ec99137c57c5895795, server=jenkins-hbase9.apache.org,45597,1689159484713}] 2023-07-12 10:58:18,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure 
is done pid=91 2023-07-12 10:58:19,024 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 722a1a3493c823ec99137c57c5895795 2023-07-12 10:58:19,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 722a1a3493c823ec99137c57c5895795, disabling compactions & flushes 2023-07-12 10:58:19,025 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795. 2023-07-12 10:58:19,026 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795. 2023-07-12 10:58:19,026 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795. after waiting 0 ms 2023-07-12 10:58:19,026 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795. 2023-07-12 10:58:19,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCreateAndDrop/722a1a3493c823ec99137c57c5895795/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:19,030 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795. 2023-07-12 10:58:19,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 722a1a3493c823ec99137c57c5895795: 2023-07-12 10:58:19,032 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 722a1a3493c823ec99137c57c5895795 2023-07-12 10:58:19,032 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=722a1a3493c823ec99137c57c5895795, regionState=CLOSED 2023-07-12 10:58:19,032 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689159499032"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159499032"}]},"ts":"1689159499032"} 2023-07-12 10:58:19,035 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=92 2023-07-12 10:58:19,036 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=92, state=SUCCESS; CloseRegionProcedure 722a1a3493c823ec99137c57c5895795, server=jenkins-hbase9.apache.org,45597,1689159484713 in 162 msec 2023-07-12 10:58:19,037 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-12 10:58:19,037 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=722a1a3493c823ec99137c57c5895795, UNASSIGN in 166 msec 2023-07-12 10:58:19,038 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159499038"}]},"ts":"1689159499038"} 
2023-07-12 10:58:19,039 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=DISABLED in hbase:meta 2023-07-12 10:58:19,040 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testCreateAndDrop to state=DISABLED 2023-07-12 10:58:19,042 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop in 180 msec 2023-07-12 10:58:19,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-12 10:58:19,167 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateAndDrop, procId: 91 completed 2023-07-12 10:58:19,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete Group_testCreateAndDrop 2023-07-12 10:58:19,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-12 10:58:19,170 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=94, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-12 10:58:19,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateAndDrop' from rsgroup 'default' 2023-07-12 10:58:19,171 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=94, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-12 10:58:19,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:19,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:19,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:19,175 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateAndDrop/722a1a3493c823ec99137c57c5895795 2023-07-12 10:58:19,176 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateAndDrop/722a1a3493c823ec99137c57c5895795/cf, FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateAndDrop/722a1a3493c823ec99137c57c5895795/recovered.edits] 2023-07-12 10:58:19,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-12 10:58:19,183 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateAndDrop/722a1a3493c823ec99137c57c5895795/recovered.edits/4.seqid to 
hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/default/Group_testCreateAndDrop/722a1a3493c823ec99137c57c5895795/recovered.edits/4.seqid 2023-07-12 10:58:19,183 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCreateAndDrop/722a1a3493c823ec99137c57c5895795 2023-07-12 10:58:19,183 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndDrop regions 2023-07-12 10:58:19,186 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=94, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-12 10:58:19,188 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCreateAndDrop from hbase:meta 2023-07-12 10:58:19,189 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateAndDrop' descriptor. 2023-07-12 10:58:19,190 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=94, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-12 10:58:19,190 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateAndDrop' from region states. 2023-07-12 10:58:19,190 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159499190"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:19,192 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 10:58:19,192 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 722a1a3493c823ec99137c57c5895795, NAME => 'Group_testCreateAndDrop,,1689159498243.722a1a3493c823ec99137c57c5895795.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 10:58:19,192 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateAndDrop' as deleted. 
2023-07-12 10:58:19,192 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689159499192"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:19,193 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateAndDrop state from META 2023-07-12 10:58:19,195 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=94, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-12 10:58:19,196 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop in 28 msec 2023-07-12 10:58:19,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-12 10:58:19,281 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateAndDrop, procId: 94 completed 2023-07-12 10:58:19,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:19,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:19,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:19,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:19,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:19,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:19,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:19,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:19,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:19,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:19,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:19,298 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:19,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:19,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:19,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:19,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:19,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:19,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:19,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:19,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:41017] to rsgroup master 2023-07-12 10:58:19,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:19,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.CallRunner(144): callId: 451 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:45870 deadline: 1689160699315, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 2023-07-12 10:58:19,316 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:19,320 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:19,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:19,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:19,321 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:42501, jenkins-hbase9.apache.org:43117, jenkins-hbase9.apache.org:43635, jenkins-hbase9.apache.org:45597], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:19,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:19,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:19,340 INFO [Listener at localhost/44831] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateAndDrop Thread=515 (was 514) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/cluster_88e0f84f-bfb4-0918-fd25-f5762e628808/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/cluster_88e0f84f-bfb4-0918-fd25-f5762e628808/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x324c1766-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xddfa172-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-413285126_17 at /127.0.0.1:39702 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x324c1766-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=817 (was 817), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=312 (was 322), ProcessCount=170 (was 170), AvailableMemoryMB=8042 (was 8046) 2023-07-12 10:58:19,341 WARN [Listener at localhost/44831] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-12 10:58:19,358 INFO [Listener at localhost/44831] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCloneSnapshot Thread=515, OpenFileDescriptor=817, MaxFileDescriptor=60000, SystemLoadAverage=312, ProcessCount=170, AvailableMemoryMB=8041 2023-07-12 10:58:19,358 WARN [Listener at localhost/44831] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-12 10:58:19,358 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(132): testCloneSnapshot 2023-07-12 10:58:19,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:19,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:19,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:19,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:19,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:19,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:19,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:19,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:19,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:19,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:19,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:19,374 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:19,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:19,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:19,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:19,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:19,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:19,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:19,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:19,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:41017] to rsgroup master 2023-07-12 10:58:19,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:19,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.CallRunner(144): callId: 479 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:45870 deadline: 1689160699384, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 2023-07-12 10:58:19,385 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:19,387 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:19,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:19,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:19,388 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:42501, jenkins-hbase9.apache.org:43117, jenkins-hbase9.apache.org:43635, jenkins-hbase9.apache.org:45597], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:19,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:19,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:19,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'Group_testCloneSnapshot', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:19,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=95, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-12 10:58:19,393 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=95, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:19,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(700): 
Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "Group_testCloneSnapshot" procId is: 95 2023-07-12 10:58:19,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=95 2023-07-12 10:58:19,395 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:19,395 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:19,395 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:19,407 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=95, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:19,409 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCloneSnapshot/8fff6e822c369895d5dae9f4dde7d555 2023-07-12 10:58:19,409 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCloneSnapshot/8fff6e822c369895d5dae9f4dde7d555 empty. 2023-07-12 10:58:19,410 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCloneSnapshot/8fff6e822c369895d5dae9f4dde7d555 2023-07-12 10:58:19,410 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot regions 2023-07-12 10:58:19,439 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCloneSnapshot/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:19,440 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8fff6e822c369895d5dae9f4dde7d555, NAME => 'Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCloneSnapshot', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:19,450 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:19,450 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1604): Closing 8fff6e822c369895d5dae9f4dde7d555, disabling compactions & flushes 2023-07-12 10:58:19,451 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555. 
2023-07-12 10:58:19,451 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555. 2023-07-12 10:58:19,451 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555. after waiting 0 ms 2023-07-12 10:58:19,451 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555. 2023-07-12 10:58:19,451 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555. 2023-07-12 10:58:19,451 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1558): Region close journal for 8fff6e822c369895d5dae9f4dde7d555: 2023-07-12 10:58:19,453 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=95, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:19,454 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689159499454"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159499454"}]},"ts":"1689159499454"} 2023-07-12 10:58:19,455 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 10:58:19,456 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=95, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:19,456 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159499456"}]},"ts":"1689159499456"} 2023-07-12 10:58:19,457 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=ENABLING in hbase:meta 2023-07-12 10:58:19,460 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:19,460 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:19,460 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:19,460 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:19,460 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 10:58:19,460 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:19,460 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=8fff6e822c369895d5dae9f4dde7d555, ASSIGN}] 2023-07-12 10:58:19,462 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=96, ppid=95, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=Group_testCloneSnapshot, region=8fff6e822c369895d5dae9f4dde7d555, ASSIGN 2023-07-12 10:58:19,463 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=96, ppid=95, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=8fff6e822c369895d5dae9f4dde7d555, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,43117,1689159488336; forceNewPlan=false, retain=false 2023-07-12 10:58:19,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=95 2023-07-12 10:58:19,613 INFO [jenkins-hbase9:41017] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 10:58:19,615 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=96 updating hbase:meta row=8fff6e822c369895d5dae9f4dde7d555, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:19,615 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689159499615"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159499615"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159499615"}]},"ts":"1689159499615"} 2023-07-12 10:58:19,616 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=97, ppid=96, state=RUNNABLE; OpenRegionProcedure 8fff6e822c369895d5dae9f4dde7d555, server=jenkins-hbase9.apache.org,43117,1689159488336}] 2023-07-12 10:58:19,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=95 2023-07-12 10:58:19,772 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555. 
2023-07-12 10:58:19,772 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8fff6e822c369895d5dae9f4dde7d555, NAME => 'Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:19,772 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCloneSnapshot 8fff6e822c369895d5dae9f4dde7d555 2023-07-12 10:58:19,772 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:19,772 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 8fff6e822c369895d5dae9f4dde7d555 2023-07-12 10:58:19,772 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 8fff6e822c369895d5dae9f4dde7d555 2023-07-12 10:58:19,774 INFO [StoreOpener-8fff6e822c369895d5dae9f4dde7d555-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family test of region 8fff6e822c369895d5dae9f4dde7d555 2023-07-12 10:58:19,775 DEBUG [StoreOpener-8fff6e822c369895d5dae9f4dde7d555-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCloneSnapshot/8fff6e822c369895d5dae9f4dde7d555/test 2023-07-12 10:58:19,775 DEBUG [StoreOpener-8fff6e822c369895d5dae9f4dde7d555-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCloneSnapshot/8fff6e822c369895d5dae9f4dde7d555/test 2023-07-12 10:58:19,776 INFO [StoreOpener-8fff6e822c369895d5dae9f4dde7d555-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8fff6e822c369895d5dae9f4dde7d555 columnFamilyName test 2023-07-12 10:58:19,776 INFO [StoreOpener-8fff6e822c369895d5dae9f4dde7d555-1] regionserver.HStore(310): Store=8fff6e822c369895d5dae9f4dde7d555/test, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:19,777 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCloneSnapshot/8fff6e822c369895d5dae9f4dde7d555 2023-07-12 10:58:19,777 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCloneSnapshot/8fff6e822c369895d5dae9f4dde7d555 2023-07-12 10:58:19,780 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 8fff6e822c369895d5dae9f4dde7d555 2023-07-12 10:58:19,782 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCloneSnapshot/8fff6e822c369895d5dae9f4dde7d555/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:19,782 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 8fff6e822c369895d5dae9f4dde7d555; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11734455840, jitterRate=0.09285636246204376}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:19,782 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 8fff6e822c369895d5dae9f4dde7d555: 2023-07-12 10:58:19,783 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555., pid=97, masterSystemTime=1689159499768 2023-07-12 10:58:19,784 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555. 2023-07-12 10:58:19,784 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555. 
2023-07-12 10:58:19,784 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=96 updating hbase:meta row=8fff6e822c369895d5dae9f4dde7d555, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:19,785 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689159499784"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159499784"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159499784"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159499784"}]},"ts":"1689159499784"} 2023-07-12 10:58:19,788 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=97, resume processing ppid=96 2023-07-12 10:58:19,788 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=97, ppid=96, state=SUCCESS; OpenRegionProcedure 8fff6e822c369895d5dae9f4dde7d555, server=jenkins-hbase9.apache.org,43117,1689159488336 in 170 msec 2023-07-12 10:58:19,789 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-12 10:58:19,789 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=8fff6e822c369895d5dae9f4dde7d555, ASSIGN in 328 msec 2023-07-12 10:58:19,790 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=95, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:19,790 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159499790"}]},"ts":"1689159499790"} 2023-07-12 10:58:19,791 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=ENABLED in hbase:meta 2023-07-12 10:58:19,793 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=95, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:19,795 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=95, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot in 403 msec 2023-07-12 10:58:19,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=95 2023-07-12 10:58:19,997 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCloneSnapshot, procId: 95 completed 2023-07-12 10:58:19,997 DEBUG [Listener at localhost/44831] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCloneSnapshot get assigned. Timeout = 60000ms 2023-07-12 10:58:19,997 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:20,001 INFO [Listener at localhost/44831] hbase.HBaseTestingUtility(3484): All regions for table Group_testCloneSnapshot assigned to meta. Checking AM states. 
2023-07-12 10:58:20,002 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:20,002 INFO [Listener at localhost/44831] hbase.HBaseTestingUtility(3504): All regions for table Group_testCloneSnapshot assigned. 2023-07-12 10:58:20,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1583): Client=jenkins//172.31.2.10 snapshot request for:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-12 10:58:20,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] snapshot.SnapshotDescriptionUtils(316): Creation time not specified, setting to:1689159500013 (current time:1689159500013). 2023-07-12 10:58:20,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] snapshot.SnapshotDescriptionUtils(332): Snapshot current TTL value: 0 resetting it to default value: 0 2023-07-12 10:58:20,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] zookeeper.ReadOnlyZKClient(139): Connect 0x241ff266 to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:20,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@35e69456, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:20,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:20,025 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:59334, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:20,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x241ff266 to 127.0.0.1:49301 2023-07-12 10:58:20,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:20,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] snapshot.SnapshotManager(601): No existing snapshot, attempting snapshot... 
2023-07-12 10:58:20,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] snapshot.SnapshotManager(648): Table enabled, starting distributed snapshots for { ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-12 10:58:20,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=98, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-12 10:58:20,048 DEBUG [PEWorker-5] locking.LockProcedure(309): LOCKED pid=98, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-12 10:58:20,048 INFO [PEWorker-5] procedure2.TimeoutExecutorThread(81): ADDED pid=98, state=WAITING_TIMEOUT, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE; timeout=600000, timestamp=1689160100048 2023-07-12 10:58:20,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] snapshot.SnapshotManager(653): Started snapshot: { ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-12 10:58:20,049 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0-0] snapshot.TakeSnapshotHandler(174): Running FLUSH table snapshot Group_testCloneSnapshot_snap C_M_SNAPSHOT_TABLE on table Group_testCloneSnapshot 2023-07-12 10:58:20,050 DEBUG [PEWorker-2] locking.LockProcedure(242): UNLOCKED pid=98, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-12 10:58:20,051 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0-0] procedure2.ProcedureExecutor(1029): Stored pid=99, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-12 10:58:20,052 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=98, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE in 10 msec 2023-07-12 10:58:20,053 DEBUG [PEWorker-2] locking.LockProcedure(309): LOCKED pid=99, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-12 10:58:20,053 INFO [PEWorker-2] procedure2.TimeoutExecutorThread(81): ADDED pid=99, state=WAITING_TIMEOUT, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED; timeout=600000, timestamp=1689160100053 2023-07-12 10:58:20,055 DEBUG [Listener at localhost/44831] client.HBaseAdmin(2418): Waiting a max of 300000 ms for snapshot '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }'' to complete. (max 20000 ms per retry) 2023-07-12 10:58:20,055 DEBUG [Listener at localhost/44831] client.HBaseAdmin(2428): (#1) Sleeping: 100ms while waiting for snapshot completion. 
2023-07-12 10:58:20,077 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0-0] procedure.ProcedureCoordinator(165): Submitting procedure Group_testCloneSnapshot_snap 2023-07-12 10:58:20,078 INFO [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'Group_testCloneSnapshot_snap' 2023-07-12 10:58:20,078 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-12 10:58:20,078 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'Group_testCloneSnapshot_snap' starting 'acquire' 2023-07-12 10:58:20,078 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'Group_testCloneSnapshot_snap', kicking off acquire phase on members. 2023-07-12 10:58:20,079 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,079 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,081 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-12 10:58:20,081 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-12 10:58:20,081 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:20,081 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-12 10:58:20,081 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-12 10:58:20,081 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-12 10:58:20,081 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-12 10:58:20,081 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-12 10:58:20,081 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new 
procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:20,081 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:20,082 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:20,081 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:20,081 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-12 10:58:20,082 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-07-12 10:58:20,082 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:20,082 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,082 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,082 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,082 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,082 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,082 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,082 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,083 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-12 10:58:20,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-12 10:58:20,083 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-12 10:58:20,083 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-12 10:58:20,083 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-12 10:58:20,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-12 10:58:20,083 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-12 10:58:20,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,083 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-12 10:58:20,085 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-12 10:58:20,085 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-12 10:58:20,085 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-12 10:58:20,086 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-12 10:58:20,086 DEBUG [member: 'jenkins-hbase9.apache.org,42501,1689159484335' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-12 10:58:20,086 DEBUG [member: 'jenkins-hbase9.apache.org,45597,1689159484713' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-12 10:58:20,086 DEBUG [member: 'jenkins-hbase9.apache.org,45597,1689159484713' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-12 10:58:20,086 DEBUG [member: 'jenkins-hbase9.apache.org,43635,1689159491271' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-12 10:58:20,087 DEBUG [member: 'jenkins-hbase9.apache.org,45597,1689159484713' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-12 10:58:20,086 DEBUG [member: 'jenkins-hbase9.apache.org,42501,1689159484335' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-12 10:58:20,086 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] procedure.Subprocedure(151): Starting 
subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-12 10:58:20,087 DEBUG [member: 'jenkins-hbase9.apache.org,42501,1689159484335' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-12 10:58:20,087 DEBUG [member: 'jenkins-hbase9.apache.org,42501,1689159484335' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-12 10:58:20,087 DEBUG [member: 'jenkins-hbase9.apache.org,45597,1689159484713' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-12 10:58:20,087 DEBUG [member: 'jenkins-hbase9.apache.org,45597,1689159484713' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase9.apache.org,45597,1689159484713' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-12 10:58:20,087 DEBUG [member: 'jenkins-hbase9.apache.org,43635,1689159491271' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-12 10:58:20,087 DEBUG [member: 'jenkins-hbase9.apache.org,42501,1689159484335' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase9.apache.org,42501,1689159484335' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-12 10:58:20,088 DEBUG [member: 'jenkins-hbase9.apache.org,43635,1689159491271' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-12 10:58:20,088 DEBUG [member: 'jenkins-hbase9.apache.org,43635,1689159491271' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-12 10:58:20,088 DEBUG [member: 'jenkins-hbase9.apache.org,43635,1689159491271' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase9.apache.org,43635,1689159491271' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-12 10:58:20,087 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-12 10:58:20,089 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-12 10:58:20,089 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-12 10:58:20,089 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase9.apache.org,43117,1689159488336' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-12 10:58:20,090 DEBUG [member: 'jenkins-hbase9.apache.org,45597,1689159484713' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,094 DEBUG [member: 'jenkins-hbase9.apache.org,42501,1689159484335' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,094 DEBUG [member: 'jenkins-hbase9.apache.org,43635,1689159491271' 
subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,094 DEBUG [member: 'jenkins-hbase9.apache.org,45597,1689159484713' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,094 DEBUG [member: 'jenkins-hbase9.apache.org,45597,1689159484713' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-12 10:58:20,094 DEBUG [member: 'jenkins-hbase9.apache.org,42501,1689159484335' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,094 DEBUG [member: 'jenkins-hbase9.apache.org,42501,1689159484335' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-12 10:58:20,095 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,095 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:20,095 DEBUG [member: 'jenkins-hbase9.apache.org,43635,1689159491271' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,096 DEBUG [member: 'jenkins-hbase9.apache.org,43635,1689159491271' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-12 10:58:20,096 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:20,096 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-12 10:58:20,096 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-12 10:58:20,096 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,096 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-12 10:58:20,096 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-12 10:58:20,096 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-12 10:58:20,097 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-12 10:58:20,097 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:20,097 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:20,097 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:20,098 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:20,098 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-12 10:58:20,098 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase9.apache.org,43117,1689159488336' joining acquired barrier for procedure 'Group_testCloneSnapshot_snap' on coordinator 2023-07-12 10:58:20,098 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'Group_testCloneSnapshot_snap' starting 'in-barrier' execution. 2023-07-12 10:58:20,098 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@3fc2090d[Count = 0] remaining members to acquire global barrier 2023-07-12 10:58:20,098 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,100 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,100 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,100 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,100 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,100 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,100 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,100 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:20,100 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-07-12 10:58:20,100 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,100 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,100 DEBUG [member: 'jenkins-hbase9.apache.org,45597,1689159484713' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-12 10:58:20,100 DEBUG [member: 'jenkins-hbase9.apache.org,42501,1689159484335' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-12 10:58:20,100 DEBUG [member: 'jenkins-hbase9.apache.org,45597,1689159484713' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-12 10:58:20,100 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,100 DEBUG [member: 'jenkins-hbase9.apache.org,45597,1689159484713' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase9.apache.org,45597,1689159484713' in zk 2023-07-12 10:58:20,100 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,100 DEBUG [member: 'jenkins-hbase9.apache.org,42501,1689159484335' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-12 10:58:20,100 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-12 10:58:20,100 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,100 DEBUG [member: 'jenkins-hbase9.apache.org,42501,1689159484335' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase9.apache.org,42501,1689159484335' in zk 2023-07-12 10:58:20,100 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,101 DEBUG [member: 'jenkins-hbase9.apache.org,43635,1689159491271' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 
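[editor's note] The acquire/reached exchange logged above is a two-phase barrier kept under /hbase/online-snapshot in ZooKeeper: each region server announces 'acquired' by creating a child znode for the procedure, then waits for the coordinator's global 'reached' znode. A minimal sketch of the member-side steps, written against the plain ZooKeeper client rather than HBase's internal ZKProcedureMemberRpcs; the quorum address, procedure name, and member name are copied from the log, the per-procedure parent node is assumed to already exist (as it does above), and the code is illustrative only, not the test's implementation.

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class BarrierMemberSketch {
  public static void main(String[] args) throws Exception {
    // Quorum address taken from the log lines above.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:49301", 30000, e -> { });
    String proc = "Group_testCloneSnapshot_snap";
    String member = "jenkins-hbase9.apache.org,43117,1689159488336";
    // 1. Join the acquired barrier: /hbase/online-snapshot/acquired/<proc>/<member>
    zk.create("/hbase/online-snapshot/acquired/" + proc + "/" + member,
        new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    // 2. Watch for the coordinator's global 'reached' barrier node, created once
    //    every member has joined the acquired barrier.
    zk.exists("/hbase/online-snapshot/reached/" + proc,
        event -> System.out.println("global barrier reached: " + event.getPath()));
    zk.close();
  }
}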
2023-07-12 10:58:20,101 DEBUG [member: 'jenkins-hbase9.apache.org,43635,1689159491271' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-12 10:58:20,101 DEBUG [member: 'jenkins-hbase9.apache.org,43635,1689159491271' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase9.apache.org,43635,1689159491271' in zk 2023-07-12 10:58:20,101 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] snapshot.FlushSnapshotSubprocedure(170): Flush Snapshot Tasks submitted for 1 regions 2023-07-12 10:58:20,101 DEBUG [rs(jenkins-hbase9.apache.org,43117,1689159488336)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(97): Starting snapshot operation on Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555. 2023-07-12 10:58:20,101 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(301): Waiting for local region snapshots to finish. 2023-07-12 10:58:20,101 DEBUG [member: 'jenkins-hbase9.apache.org,45597,1689159484713' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-12 10:58:20,102 DEBUG [member: 'jenkins-hbase9.apache.org,45597,1689159484713' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-12 10:58:20,102 DEBUG [member: 'jenkins-hbase9.apache.org,45597,1689159484713' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-12 10:58:20,102 DEBUG [rs(jenkins-hbase9.apache.org,43117,1689159488336)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(110): Flush Snapshotting region Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555. started... 2023-07-12 10:58:20,102 DEBUG [member: 'jenkins-hbase9.apache.org,42501,1689159484335' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-12 10:58:20,103 DEBUG [member: 'jenkins-hbase9.apache.org,42501,1689159484335' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-12 10:58:20,103 DEBUG [member: 'jenkins-hbase9.apache.org,42501,1689159484335' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-12 10:58:20,104 DEBUG [rs(jenkins-hbase9.apache.org,43117,1689159488336)-snapshot-pool-0] regionserver.HRegion(2446): Flush status journal for 8fff6e822c369895d5dae9f4dde7d555: 2023-07-12 10:58:20,105 DEBUG [rs(jenkins-hbase9.apache.org,43117,1689159488336)-snapshot-pool-0] snapshot.SnapshotManifest(238): Storing 'Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555.' 
region-info for snapshot=Group_testCloneSnapshot_snap 2023-07-12 10:58:20,105 DEBUG [member: 'jenkins-hbase9.apache.org,43635,1689159491271' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-12 10:58:20,105 DEBUG [member: 'jenkins-hbase9.apache.org,43635,1689159491271' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-12 10:58:20,105 DEBUG [member: 'jenkins-hbase9.apache.org,43635,1689159491271' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-12 10:58:20,110 DEBUG [rs(jenkins-hbase9.apache.org,43117,1689159488336)-snapshot-pool-0] snapshot.SnapshotManifest(243): Creating references for hfiles 2023-07-12 10:58:20,114 DEBUG [rs(jenkins-hbase9.apache.org,43117,1689159488336)-snapshot-pool-0] snapshot.SnapshotManifest(253): Adding snapshot references for [] hfiles 2023-07-12 10:58:20,124 DEBUG [rs(jenkins-hbase9.apache.org,43117,1689159488336)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(137): ... Flush Snapshotting region Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555. completed. 2023-07-12 10:58:20,124 DEBUG [rs(jenkins-hbase9.apache.org,43117,1689159488336)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(140): Closing snapshot operation on Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555. 2023-07-12 10:58:20,125 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(312): Completed 1/1 local region snapshots. 2023-07-12 10:58:20,125 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(314): Completed 1 local region snapshots. 2023-07-12 10:58:20,125 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(345): cancelling 0 tasks for snapshot jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:20,125 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-12 10:58:20,125 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase9.apache.org,43117,1689159488336' in zk 2023-07-12 10:58:20,127 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-12 10:58:20,127 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:20,127 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-07-12 10:58:20,127 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:20,127 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-12 10:58:20,127 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-12 10:58:20,127 DEBUG [member: 'jenkins-hbase9.apache.org,43117,1689159488336' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-12 10:58:20,129 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-12 10:58:20,129 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-12 10:58:20,129 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-12 10:58:20,129 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:20,130 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:20,130 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:20,130 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:20,130 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-12 10:58:20,131 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-12 10:58:20,131 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:20,131 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:20,131 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:20,132 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:20,132 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'Group_testCloneSnapshot_snap' member 'jenkins-hbase9.apache.org,43117,1689159488336': 2023-07-12 10:58:20,132 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase9.apache.org,43117,1689159488336' released barrier for procedure'Group_testCloneSnapshot_snap', counting down latch. Waiting for 0 more 2023-07-12 10:58:20,132 INFO [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'Group_testCloneSnapshot_snap' execution completed 2023-07-12 10:58:20,132 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-07-12 10:58:20,132 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-07-12 10:58:20,132 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:Group_testCloneSnapshot_snap 2023-07-12 10:58:20,133 INFO [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure Group_testCloneSnapshot_snapincluding nodes /hbase/online-snapshot/acquired /hbase/online-snapshot/reached /hbase/online-snapshot/abort 2023-07-12 10:58:20,134 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,134 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,134 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,134 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,134 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-12 10:58:20,134 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-12 10:58:20,134 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,134 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,134 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-12 10:58:20,134 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,134 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,134 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,135 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-12 10:58:20,135 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-12 10:58:20,134 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,135 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-12 10:58:20,135 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,135 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-12 10:58:20,135 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:20,135 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,135 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,134 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-12 10:58:20,135 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-12 10:58:20,135 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,135 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:20,135 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-12 10:58:20,135 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,135 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:20,135 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-12 10:58:20,136 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:20,136 INFO [zk-event-processor-pool-0] 
procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-12 10:58:20,136 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:20,136 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:20,136 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-12 10:58:20,136 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,136 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,136 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,136 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:20,136 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-12 10:58:20,137 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:20,137 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:20,137 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:20,137 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:20,138 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:20,138 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-12 10:58:20,138 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-12 10:58:20,139 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:20,139 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:20,139 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:20,139 DEBUG 
[(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:20,139 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:20,139 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:20,139 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:20,140 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:20,143 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:20,143 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,143 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-12 10:58:20,143 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-12 10:58:20,143 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-12 10:58:20,143 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-12 10:58:20,143 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-12 10:58:20,143 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:20,143 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children 
changed event: /hbase/online-snapshot/acquired 2023-07-12 10:58:20,143 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:20,143 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-12 10:58:20,143 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:20,143 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-12 10:58:20,143 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-12 10:58:20,143 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-12 10:58:20,143 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-12 10:58:20,143 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:20,144 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:20,144 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-12 10:58:20,143 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0-0] snapshot.EnabledTableSnapshotHandler(97): Done waiting - online snapshot for Group_testCloneSnapshot_snap 2023-07-12 10:58:20,144 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:20,143 DEBUG [(jenkins-hbase9.apache.org,41017,1689159482181)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-07-12 10:58:20,143 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-12 10:58:20,144 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-12 10:58:20,144 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-12 10:58:20,144 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:20,144 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:20,144 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-12 10:58:20,144 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:20,144 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0-0] snapshot.SnapshotManifest(484): Convert to Single Snapshot Manifest for Group_testCloneSnapshot_snap 2023-07-12 10:58:20,144 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:20,145 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:20,145 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,145 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:20,145 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,145 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:20,145 DEBUG [Listener at 
localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:20,146 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:20,146 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,146 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,147 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0-0] snapshot.SnapshotManifestV1(126): No regions under directory:hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,156 DEBUG [Listener at localhost/44831] client.HBaseAdmin(2434): Getting current status of snapshot from master... 2023-07-12 10:58:20,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1212): Checking to see if snapshot from request:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } is done 2023-07-12 10:58:20,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] snapshot.SnapshotManager(404): Snapshoting '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }' is still in progress! 2023-07-12 10:58:20,160 DEBUG [Listener at localhost/44831] client.HBaseAdmin(2428): (#2) Sleeping: 200ms while waiting for snapshot completion. 
2023-07-12 10:58:20,175 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0-0] snapshot.SnapshotDescriptionUtils(404): Sentinel is done, just moving the snapshot from hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.hbase-snapshot/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,201 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0-0] snapshot.TakeSnapshotHandler(229): Snapshot Group_testCloneSnapshot_snap of table Group_testCloneSnapshot completed 2023-07-12 10:58:20,201 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0-0] snapshot.TakeSnapshotHandler(246): Launching cleanup of working dir:hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,201 ERROR [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0-0] snapshot.TakeSnapshotHandler(251): Couldn't delete snapshot working directory:hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-12 10:58:20,202 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0-0] snapshot.TakeSnapshotHandler(257): Table snapshot journal : Running FLUSH table snapshot Group_testCloneSnapshot_snap C_M_SNAPSHOT_TABLE on table Group_testCloneSnapshot at 1689159500049Consolidate snapshot: Group_testCloneSnapshot_snap at 1689159500144 (+95 ms)Loading Region manifests for Group_testCloneSnapshot_snap at 1689159500145 (+1 ms)Writing data manifest for Group_testCloneSnapshot_snap at 1689159500155 (+10 ms)Verifying snapshot: Group_testCloneSnapshot_snap at 1689159500166 (+11 ms)Snapshot Group_testCloneSnapshot_snap of table Group_testCloneSnapshot completed at 1689159500201 (+35 ms) 2023-07-12 10:58:20,204 DEBUG [PEWorker-1] locking.LockProcedure(242): UNLOCKED pid=99, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-12 10:58:20,206 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=99, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED in 153 msec 2023-07-12 10:58:20,360 DEBUG [Listener at localhost/44831] client.HBaseAdmin(2434): Getting current status of snapshot from master... 2023-07-12 10:58:20,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1212): Checking to see if snapshot from request:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } is done 2023-07-12 10:58:20,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] snapshot.SnapshotManager(401): Snapshot '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }' has completed, notifying client. 
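[editor's note] The sequence above (FLUSH snapshot taken, client polling every 200 ms until the master reports completion, then the CloneSnapshotProcedure that follows) corresponds to two calls on the public HBase Admin API. A minimal sketch, assuming a standalone client with the cluster's configuration on its classpath; the table and snapshot names are copied from the log, everything else is illustrative and not taken from the test source.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CloneSnapshotSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName source = TableName.valueOf("Group_testCloneSnapshot");
      // Takes a FLUSH-type snapshot and blocks, polling the master for completion
      // (the repeated "is done" checks and 200 ms sleeps seen in the log).
      admin.snapshot("Group_testCloneSnapshot_snap", source);
      // Clones the snapshot into a new table, which drives the
      // CloneSnapshotProcedure and region assignment logged below.
      admin.cloneSnapshot("Group_testCloneSnapshot_snap",
          TableName.valueOf("Group_testCloneSnapshot_clone"));
    }
  }
}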
2023-07-12 10:58:20,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint(486): Pre-moving table Group_testCloneSnapshot_clone to RSGroup default 2023-07-12 10:58:20,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:20,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:20,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:20,379 ERROR [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(742): TableDescriptor of table {} not found. Skipping the region movement of this table. 2023-07-12 10:58:20,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:CLONE_SNAPSHOT_PRE_OPERATION; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689159500013 type: FLUSH version: 2 ttl: 0 ) 2023-07-12 10:58:20,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] snapshot.SnapshotManager(750): Clone snapshot=Group_testCloneSnapshot_snap as table=Group_testCloneSnapshot_clone 2023-07-12 10:58:20,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-12 10:58:20,414 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCloneSnapshot_clone/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:20,419 INFO [PEWorker-3] snapshot.RestoreSnapshotHelper(177): starting restore table regions using snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689159500013 type: FLUSH version: 2 ttl: 0 2023-07-12 10:58:20,420 DEBUG [PEWorker-3] snapshot.RestoreSnapshotHelper(785): get table regions: hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCloneSnapshot_clone 2023-07-12 10:58:20,420 INFO [PEWorker-3] snapshot.RestoreSnapshotHelper(239): region to add: 8fff6e822c369895d5dae9f4dde7d555 2023-07-12 10:58:20,420 INFO [PEWorker-3] snapshot.RestoreSnapshotHelper(585): clone region=8fff6e822c369895d5dae9f4dde7d555 as 5382396768adde3ffcfa521569ae43c3 in snapshot Group_testCloneSnapshot_snap 2023-07-12 10:58:20,421 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5382396768adde3ffcfa521569ae43c3, NAME => 'Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCloneSnapshot_clone', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:20,432 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(866): Instantiated 
Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:20,432 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1604): Closing 5382396768adde3ffcfa521569ae43c3, disabling compactions & flushes 2023-07-12 10:58:20,432 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3. 2023-07-12 10:58:20,432 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3. 2023-07-12 10:58:20,432 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3. after waiting 0 ms 2023-07-12 10:58:20,432 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3. 2023-07-12 10:58:20,432 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3. 2023-07-12 10:58:20,432 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1558): Region close journal for 5382396768adde3ffcfa521569ae43c3: 2023-07-12 10:58:20,432 INFO [PEWorker-3] snapshot.RestoreSnapshotHelper(266): finishing restore table regions using snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689159500013 type: FLUSH version: 2 ttl: 0 2023-07-12 10:58:20,433 INFO [PEWorker-3] procedure.CloneSnapshotProcedure$1(421): Clone snapshot=Group_testCloneSnapshot_snap on table=Group_testCloneSnapshot_clone completed! 2023-07-12 10:58:20,437 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689159500437"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159500437"}]},"ts":"1689159500437"} 2023-07-12 10:58:20,439 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
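The CloneSnapshotProcedure records above restore the snapshot's region manifest as region 5382396768adde3ffcfa521569ae43c3 of Group_testCloneSnapshot_clone and register it in hbase:meta. A minimal sketch of the client call that triggers this path, reusing the Connection conn from the sketch above (a method-body fragment, not a complete program):

try (Admin admin = conn.getAdmin()) {
  // Materialize the snapshot as a new table; the master stores a
  // CloneSnapshotProcedure (pid=100 above) and then assigns the cloned region.
  admin.cloneSnapshot("Group_testCloneSnapshot_snap",
      TableName.valueOf("Group_testCloneSnapshot_clone"));
}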
2023-07-12 10:58:20,440 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159500440"}]},"ts":"1689159500440"} 2023-07-12 10:58:20,442 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=ENABLING in hbase:meta 2023-07-12 10:58:20,446 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:20,446 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:20,446 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:20,446 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:20,446 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 10:58:20,446 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:20,447 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=5382396768adde3ffcfa521569ae43c3, ASSIGN}] 2023-07-12 10:58:20,449 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=5382396768adde3ffcfa521569ae43c3, ASSIGN 2023-07-12 10:58:20,450 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=5382396768adde3ffcfa521569ae43c3, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,43635,1689159491271; forceNewPlan=false, retain=false 2023-07-12 10:58:20,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-12 10:58:20,600 INFO [jenkins-hbase9:41017] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 10:58:20,602 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=5382396768adde3ffcfa521569ae43c3, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:20,602 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689159500602"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159500602"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159500602"}]},"ts":"1689159500602"} 2023-07-12 10:58:20,604 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; OpenRegionProcedure 5382396768adde3ffcfa521569ae43c3, server=jenkins-hbase9.apache.org,43635,1689159491271}] 2023-07-12 10:58:20,607 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testCloneSnapshot' 2023-07-12 10:58:20,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-12 10:58:20,760 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3. 2023-07-12 10:58:20,760 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5382396768adde3ffcfa521569ae43c3, NAME => 'Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:20,760 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCloneSnapshot_clone 5382396768adde3ffcfa521569ae43c3 2023-07-12 10:58:20,760 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:20,761 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 5382396768adde3ffcfa521569ae43c3 2023-07-12 10:58:20,761 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 5382396768adde3ffcfa521569ae43c3 2023-07-12 10:58:20,762 INFO [StoreOpener-5382396768adde3ffcfa521569ae43c3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family test of region 5382396768adde3ffcfa521569ae43c3 2023-07-12 10:58:20,764 DEBUG [StoreOpener-5382396768adde3ffcfa521569ae43c3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCloneSnapshot_clone/5382396768adde3ffcfa521569ae43c3/test 2023-07-12 10:58:20,764 DEBUG [StoreOpener-5382396768adde3ffcfa521569ae43c3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCloneSnapshot_clone/5382396768adde3ffcfa521569ae43c3/test 2023-07-12 10:58:20,764 INFO [StoreOpener-5382396768adde3ffcfa521569ae43c3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5382396768adde3ffcfa521569ae43c3 columnFamilyName test 2023-07-12 10:58:20,765 INFO [StoreOpener-5382396768adde3ffcfa521569ae43c3-1] regionserver.HStore(310): Store=5382396768adde3ffcfa521569ae43c3/test, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:20,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCloneSnapshot_clone/5382396768adde3ffcfa521569ae43c3 2023-07-12 10:58:20,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCloneSnapshot_clone/5382396768adde3ffcfa521569ae43c3 2023-07-12 10:58:20,770 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 5382396768adde3ffcfa521569ae43c3 2023-07-12 10:58:20,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCloneSnapshot_clone/5382396768adde3ffcfa521569ae43c3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:20,774 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 5382396768adde3ffcfa521569ae43c3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10729629440, jitterRate=-7.253885269165039E-4}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:20,774 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 5382396768adde3ffcfa521569ae43c3: 2023-07-12 10:58:20,775 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3., pid=102, masterSystemTime=1689159500756 2023-07-12 10:58:20,777 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3. 
2023-07-12 10:58:20,777 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3. 2023-07-12 10:58:20,778 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=5382396768adde3ffcfa521569ae43c3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:20,778 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689159500778"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159500778"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159500778"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159500778"}]},"ts":"1689159500778"} 2023-07-12 10:58:20,782 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-12 10:58:20,782 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; OpenRegionProcedure 5382396768adde3ffcfa521569ae43c3, server=jenkins-hbase9.apache.org,43635,1689159491271 in 176 msec 2023-07-12 10:58:20,784 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-12 10:58:20,784 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=5382396768adde3ffcfa521569ae43c3, ASSIGN in 336 msec 2023-07-12 10:58:20,785 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159500785"}]},"ts":"1689159500785"} 2023-07-12 10:58:20,786 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=ENABLED in hbase:meta 2023-07-12 10:58:20,790 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689159500013 type: FLUSH version: 2 ttl: 0 ) in 404 msec 2023-07-12 10:58:20,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-12 10:58:20,999 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: MODIFY, Table Name: default:Group_testCloneSnapshot_clone, procId: 100 completed 2023-07-12 10:58:21,001 INFO [Listener at localhost/44831] client.HBaseAdmin$15(890): Started disable of Group_testCloneSnapshot 2023-07-12 10:58:21,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable Group_testCloneSnapshot 2023-07-12 10:58:21,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-12 10:58:21,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-12 10:58:21,005 DEBUG [PEWorker-4] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159501005"}]},"ts":"1689159501005"} 2023-07-12 10:58:21,006 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=DISABLING in hbase:meta 2023-07-12 10:58:21,009 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testCloneSnapshot to state=DISABLING 2023-07-12 10:58:21,010 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=8fff6e822c369895d5dae9f4dde7d555, UNASSIGN}] 2023-07-12 10:58:21,011 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=8fff6e822c369895d5dae9f4dde7d555, UNASSIGN 2023-07-12 10:58:21,012 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=8fff6e822c369895d5dae9f4dde7d555, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:21,012 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689159501012"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159501012"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159501012"}]},"ts":"1689159501012"} 2023-07-12 10:58:21,013 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE; CloseRegionProcedure 8fff6e822c369895d5dae9f4dde7d555, server=jenkins-hbase9.apache.org,43117,1689159488336}] 2023-07-12 10:58:21,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-12 10:58:21,165 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 8fff6e822c369895d5dae9f4dde7d555 2023-07-12 10:58:21,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 8fff6e822c369895d5dae9f4dde7d555, disabling compactions & flushes 2023-07-12 10:58:21,166 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555. 2023-07-12 10:58:21,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555. 2023-07-12 10:58:21,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555. after waiting 0 ms 2023-07-12 10:58:21,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555. 
2023-07-12 10:58:21,170 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCloneSnapshot/8fff6e822c369895d5dae9f4dde7d555/recovered.edits/5.seqid, newMaxSeqId=5, maxSeqId=1 2023-07-12 10:58:21,171 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555. 2023-07-12 10:58:21,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 8fff6e822c369895d5dae9f4dde7d555: 2023-07-12 10:58:21,172 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 8fff6e822c369895d5dae9f4dde7d555 2023-07-12 10:58:21,173 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=8fff6e822c369895d5dae9f4dde7d555, regionState=CLOSED 2023-07-12 10:58:21,173 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689159501172"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159501172"}]},"ts":"1689159501172"} 2023-07-12 10:58:21,176 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-12 10:58:21,176 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; CloseRegionProcedure 8fff6e822c369895d5dae9f4dde7d555, server=jenkins-hbase9.apache.org,43117,1689159488336 in 161 msec 2023-07-12 10:58:21,177 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=103 2023-07-12 10:58:21,177 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=103, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=8fff6e822c369895d5dae9f4dde7d555, UNASSIGN in 166 msec 2023-07-12 10:58:21,178 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159501178"}]},"ts":"1689159501178"} 2023-07-12 10:58:21,179 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=DISABLED in hbase:meta 2023-07-12 10:58:21,184 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testCloneSnapshot to state=DISABLED 2023-07-12 10:58:21,186 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot in 183 msec 2023-07-12 10:58:21,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-12 10:58:21,307 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCloneSnapshot, procId: 103 completed 2023-07-12 10:58:21,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete Group_testCloneSnapshot 2023-07-12 10:58:21,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCloneSnapshot 
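The records above show the DISABLE of Group_testCloneSnapshot completing (pid=103) and the DELETE being stored (pid=106); the same disable-then-delete sequence is repeated for the clone table further down. A short sketch of the client side, again reusing conn from the first sketch:

try (Admin admin = conn.getAdmin()) {
  TableName table = TableName.valueOf("Group_testCloneSnapshot");
  // A table must be disabled before it can be deleted; each call maps to a
  // DisableTableProcedure / DeleteTableProcedure on the master, as logged above.
  admin.disableTable(table);
  admin.deleteTable(table);
}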
2023-07-12 10:58:21,311 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=106, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-12 10:58:21,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCloneSnapshot' from rsgroup 'default' 2023-07-12 10:58:21,312 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=106, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-12 10:58:21,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:21,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:21,317 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCloneSnapshot/8fff6e822c369895d5dae9f4dde7d555 2023-07-12 10:58:21,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-12 10:58:21,319 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCloneSnapshot/8fff6e822c369895d5dae9f4dde7d555/recovered.edits, FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCloneSnapshot/8fff6e822c369895d5dae9f4dde7d555/test] 2023-07-12 10:58:21,325 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCloneSnapshot/8fff6e822c369895d5dae9f4dde7d555/recovered.edits/5.seqid to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/default/Group_testCloneSnapshot/8fff6e822c369895d5dae9f4dde7d555/recovered.edits/5.seqid 2023-07-12 10:58:21,327 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCloneSnapshot/8fff6e822c369895d5dae9f4dde7d555 2023-07-12 10:58:21,327 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot regions 2023-07-12 10:58:21,329 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=106, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-12 10:58:21,331 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCloneSnapshot from hbase:meta 2023-07-12 10:58:21,332 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testCloneSnapshot' descriptor. 
2023-07-12 10:58:21,334 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=106, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-12 10:58:21,334 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testCloneSnapshot' from region states. 2023-07-12 10:58:21,334 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159501334"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:21,335 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 10:58:21,335 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 8fff6e822c369895d5dae9f4dde7d555, NAME => 'Group_testCloneSnapshot,,1689159499390.8fff6e822c369895d5dae9f4dde7d555.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 10:58:21,335 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testCloneSnapshot' as deleted. 2023-07-12 10:58:21,335 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689159501335"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:21,337 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testCloneSnapshot state from META 2023-07-12 10:58:21,340 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=106, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-12 10:58:21,341 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot in 32 msec 2023-07-12 10:58:21,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-12 10:58:21,420 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCloneSnapshot, procId: 106 completed 2023-07-12 10:58:21,421 INFO [Listener at localhost/44831] client.HBaseAdmin$15(890): Started disable of Group_testCloneSnapshot_clone 2023-07-12 10:58:21,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable Group_testCloneSnapshot_clone 2023-07-12 10:58:21,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-12 10:58:21,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-12 10:58:21,427 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159501427"}]},"ts":"1689159501427"} 2023-07-12 10:58:21,429 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=DISABLING in hbase:meta 2023-07-12 10:58:21,430 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testCloneSnapshot_clone to state=DISABLING 2023-07-12 10:58:21,431 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=5382396768adde3ffcfa521569ae43c3, UNASSIGN}] 2023-07-12 10:58:21,433 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=5382396768adde3ffcfa521569ae43c3, UNASSIGN 2023-07-12 10:58:21,433 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=5382396768adde3ffcfa521569ae43c3, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:21,433 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689159501433"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159501433"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159501433"}]},"ts":"1689159501433"} 2023-07-12 10:58:21,437 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, ppid=108, state=RUNNABLE; CloseRegionProcedure 5382396768adde3ffcfa521569ae43c3, server=jenkins-hbase9.apache.org,43635,1689159491271}] 2023-07-12 10:58:21,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-12 10:58:21,589 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 5382396768adde3ffcfa521569ae43c3 2023-07-12 10:58:21,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 5382396768adde3ffcfa521569ae43c3, disabling compactions & flushes 2023-07-12 10:58:21,593 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3. 2023-07-12 10:58:21,593 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3. 2023-07-12 10:58:21,593 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3. after waiting 0 ms 2023-07-12 10:58:21,593 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3. 2023-07-12 10:58:21,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/default/Group_testCloneSnapshot_clone/5382396768adde3ffcfa521569ae43c3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:21,598 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3. 
2023-07-12 10:58:21,598 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 5382396768adde3ffcfa521569ae43c3: 2023-07-12 10:58:21,600 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 5382396768adde3ffcfa521569ae43c3 2023-07-12 10:58:21,600 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=5382396768adde3ffcfa521569ae43c3, regionState=CLOSED 2023-07-12 10:58:21,600 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689159501600"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159501600"}]},"ts":"1689159501600"} 2023-07-12 10:58:21,603 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=108 2023-07-12 10:58:21,604 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=108, state=SUCCESS; CloseRegionProcedure 5382396768adde3ffcfa521569ae43c3, server=jenkins-hbase9.apache.org,43635,1689159491271 in 167 msec 2023-07-12 10:58:21,605 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-12 10:58:21,605 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=5382396768adde3ffcfa521569ae43c3, UNASSIGN in 172 msec 2023-07-12 10:58:21,606 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159501606"}]},"ts":"1689159501606"} 2023-07-12 10:58:21,614 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=DISABLED in hbase:meta 2023-07-12 10:58:21,615 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testCloneSnapshot_clone to state=DISABLED 2023-07-12 10:58:21,617 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone in 194 msec 2023-07-12 10:58:21,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-12 10:58:21,730 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCloneSnapshot_clone, procId: 107 completed 2023-07-12 10:58:21,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete Group_testCloneSnapshot_clone 2023-07-12 10:58:21,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-12 10:58:21,734 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=110, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-12 10:58:21,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCloneSnapshot_clone' from rsgroup 'default' 2023-07-12 10:58:21,735 DEBUG 
[PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=110, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-12 10:58:21,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:21,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:21,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-12 10:58:21,741 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCloneSnapshot_clone/5382396768adde3ffcfa521569ae43c3 2023-07-12 10:58:21,743 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCloneSnapshot_clone/5382396768adde3ffcfa521569ae43c3/recovered.edits, FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCloneSnapshot_clone/5382396768adde3ffcfa521569ae43c3/test] 2023-07-12 10:58:21,748 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCloneSnapshot_clone/5382396768adde3ffcfa521569ae43c3/recovered.edits/4.seqid to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/default/Group_testCloneSnapshot_clone/5382396768adde3ffcfa521569ae43c3/recovered.edits/4.seqid 2023-07-12 10:58:21,750 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/default/Group_testCloneSnapshot_clone/5382396768adde3ffcfa521569ae43c3 2023-07-12 10:58:21,750 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot_clone regions 2023-07-12 10:58:21,752 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=110, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-12 10:58:21,754 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCloneSnapshot_clone from hbase:meta 2023-07-12 10:58:21,755 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testCloneSnapshot_clone' descriptor. 2023-07-12 10:58:21,756 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=110, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-12 10:58:21,756 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testCloneSnapshot_clone' from region states. 
2023-07-12 10:58:21,757 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159501756"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:21,762 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 10:58:21,762 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 5382396768adde3ffcfa521569ae43c3, NAME => 'Group_testCloneSnapshot_clone,,1689159499390.5382396768adde3ffcfa521569ae43c3.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 10:58:21,762 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testCloneSnapshot_clone' as deleted. 2023-07-12 10:58:21,762 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689159501762"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:21,763 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testCloneSnapshot_clone state from META 2023-07-12 10:58:21,766 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=110, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-12 10:58:21,767 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone in 35 msec 2023-07-12 10:58:21,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-12 10:58:21,842 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCloneSnapshot_clone, procId: 110 completed 2023-07-12 10:58:21,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:21,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
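The tearDown records above and below exercise the rsgroup admin endpoint: list groups, move tables and servers back to "default", remove and re-add the "master" group, and finally attempt to move the master's own address into it, which the following records reject with a ConstraintException because jenkins-hbase9.apache.org:41017 is the master's RPC endpoint, not an online region server. A rough sketch of the corresponding client calls, assuming the Connection conn from the first sketch; the RSGroupAdminClient constructor and method signatures are assumptions based on the hbase-rsgroup module, not verified against this exact branch:

// fragment; needed types: java.util.Collections, org.apache.hadoop.hbase.net.Address,
// org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient
RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn); // assumed constructor
rsGroupAdmin.addRSGroup("master");                  // AddRSGroup request in the log
rsGroupAdmin.getRSGroupInfo("default");             // GetRSGroupInfo request in the log
// Moving the master's address is rejected: it is not an online region server.
rsGroupAdmin.moveServers(
    Collections.singleton(Address.fromParts("jenkins-hbase9.apache.org", 41017)),
    "master");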
2023-07-12 10:58:21,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:21,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:21,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:21,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:21,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:21,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:21,858 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:21,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:21,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:21,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:21,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:21,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:41017] to rsgroup master 2023-07-12 10:58:21,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:21,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.CallRunner(144): callId: 563 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:45870 deadline: 1689160701869, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 2023-07-12 10:58:21,870 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:21,872 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:21,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,873 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:42501, jenkins-hbase9.apache.org:43117, jenkins-hbase9.apache.org:43635, jenkins-hbase9.apache.org:45597], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:21,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:21,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:21,892 INFO [Listener at localhost/44831] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCloneSnapshot Thread=515 (was 515), OpenFileDescriptor=811 (was 817), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=312 (was 312), ProcessCount=170 (was 170), AvailableMemoryMB=8204 (was 8041) - AvailableMemoryMB LEAK? 
- 2023-07-12 10:58:21,892 WARN [Listener at localhost/44831] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-12 10:58:21,909 INFO [Listener at localhost/44831] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateWhenRsgroupNoOnlineServers Thread=515, OpenFileDescriptor=811, MaxFileDescriptor=60000, SystemLoadAverage=312, ProcessCount=170, AvailableMemoryMB=8204 2023-07-12 10:58:21,910 WARN [Listener at localhost/44831] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-12 10:58:21,910 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(132): testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:21,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:21,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 10:58:21,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:21,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:21,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:21,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:21,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:21,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:21,929 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:21,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:21,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:21,933 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:21,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:21,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:41017] to rsgroup master 2023-07-12 10:58:21,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:21,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.CallRunner(144): callId: 591 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:45870 deadline: 1689160701940, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 2023-07-12 10:58:21,941 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 10:58:21,943 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:21,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,944 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:42501, jenkins-hbase9.apache.org:43117, jenkins-hbase9.apache.org:43635, jenkins-hbase9.apache.org:45597], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:21,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:21,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:21,945 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBasics(141): testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:21,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:21,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:21,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup appInfo 2023-07-12 10:58:21,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:21,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-12 10:58:21,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:21,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:21,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 
10:58:21,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:42501] to rsgroup appInfo 2023-07-12 10:58:21,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:21,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-12 10:58:21,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:21,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 10:58:21,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,42501,1689159484335] are moved back to default 2023-07-12 10:58:21,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(438): Move servers done: default => appInfo 2023-07-12 10:58:21,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:21,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=appInfo 2023-07-12 10:58:21,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:21,994 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/draining 2023-07-12 10:58:21,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.ServerManager(636): Server jenkins-hbase9.apache.org,42501,1689159484335 added to draining server list. 
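Editor's note: the ConstraintException stack traces above come from the harness's per-test cleanup, which tries to move the active master's address (jenkins-hbase9.apache.org:41017) into the "master" rsgroup; the master is not an online region server, so RSGroupAdminServer.moveServers rejects it with "is either offline or it does not exist" and TestRSGroupsBase logs it only as an FYI. The entries that follow build the fixture for testCreateWhenRsgroupNoOnlineServers: add an "appInfo" group, move region server jenkins-hbase9.apache.org:42501 into it, and put that server on the draining list. A minimal client-side sketch of those steps is below; RSGroupAdminClient, Address and the group/server names are taken from this log, while using decommissionRegionServers to reach /hbase/draining is an assumption (the test may write the draining znode directly).

    // Sketch of the fixture set up in the preceding entries (illustrative helper, not the test source).
    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class AppInfoFixture {
      static void setUp(Connection conn, ServerName rs) throws IOException {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        // RSGroupAdminService.AddRSGroup in the log
        groups.addRSGroup("appInfo");
        // RSGroupAdminService.MoveServers: default => appInfo
        groups.moveServers(
            Collections.singleton(Address.fromParts(rs.getHostname(), rs.getPort())),
            "appInfo");
        // The server then shows up under /hbase/draining; decommissioning it is one way
        // to get there (assumption -- the test may create the draining znode directly).
        try (Admin admin = conn.getAdmin()) {
          admin.decommissionRegionServers(Collections.singletonList(rs), false);
        }
      }
    }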
2023-07-12 10:58:21,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$15(3014): Client=jenkins//172.31.2.10 creating {NAME => 'Group_ns', hbase.rsgroup.name => 'appInfo'} 2023-07-12 10:58:22,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=111, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_ns 2023-07-12 10:58:22,002 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/draining/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:22,003 WARN [zk-event-processor-pool-0] master.ServerManager(632): Server jenkins-hbase9.apache.org,42501,1689159484335 is already in the draining server list.Ignoring request to add it again. 2023-07-12 10:58:22,003 INFO [zk-event-processor-pool-0] master.DrainingServerTracker(92): Draining RS node created, adding to list [jenkins-hbase9.apache.org,42501,1689159484335] 2023-07-12 10:58:22,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-12 10:58:22,011 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:22,014 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=111, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns in 14 msec 2023-07-12 10:58:22,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-12 10:58:22,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:22,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=112, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:22,110 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=112, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:22,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "Group_ns" qualifier: "testCreateWhenRsgroupNoOnlineServers" procId is: 112 2023-07-12 10:58:22,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-12 10:58:22,124 INFO [PEWorker-4] procedure2.ProcedureExecutor(1528): Rolled back pid=112, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table 
Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers exec-time=16 msec 2023-07-12 10:58:22,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-12 10:58:22,215 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 112 failed with No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to 2023-07-12 10:58:22,215 DEBUG [Listener at localhost/44831] rsgroup.TestRSGroupsBasics(162): create table error org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to at java.lang.Thread.getStackTrace(Thread.java:1564) at org.apache.hadoop.hbase.util.FutureUtils.setStackTrace(FutureUtils.java:130) at org.apache.hadoop.hbase.util.FutureUtils.rethrow(FutureUtils.java:149) at org.apache.hadoop.hbase.util.FutureUtils.get(FutureUtils.java:186) at org.apache.hadoop.hbase.client.Admin.createTable(Admin.java:302) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.testCreateWhenRsgroupNoOnlineServers(TestRSGroupsBasics.java:159) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) at --------Future.get--------(Unknown Source) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.validateRSGroup(RSGroupAdminEndpoint.java:540) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.moveTableToValidRSGroup(RSGroupAdminEndpoint.java:529) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateTableAction(RSGroupAdminEndpoint.java:501) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$16.call(MasterCoprocessorHost.java:371) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$16.call(MasterCoprocessorHost.java:368) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateTableAction(MasterCoprocessorHost.java:368) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.preCreate(CreateTableProcedure.java:267) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:93) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-12 10:58:22,222 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/draining/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:22,222 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/draining 2023-07-12 10:58:22,222 INFO [zk-event-processor-pool-0] master.DrainingServerTracker(109): Draining RS node deleted, removing from list [jenkins-hbase9.apache.org,42501,1689159484335] 2023-07-12 10:58:22,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:22,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:22,228 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=113, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:22,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating 
table: namespace: "Group_ns" qualifier: "testCreateWhenRsgroupNoOnlineServers" procId is: 113 2023-07-12 10:58:22,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-12 10:58:22,230 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:22,231 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:22,231 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-12 10:58:22,231 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:22,233 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=113, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:22,235 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/e91c46b2a26c8984176ee1d09e735ae7 2023-07-12 10:58:22,235 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/e91c46b2a26c8984176ee1d09e735ae7 empty. 2023-07-12 10:58:22,236 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/e91c46b2a26c8984176ee1d09e735ae7 2023-07-12 10:58:22,236 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_ns:testCreateWhenRsgroupNoOnlineServers regions 2023-07-12 10:58:22,249 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:22,250 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(7675): creating {ENCODED => e91c46b2a26c8984176ee1d09e735ae7, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:22,263 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(866): Instantiated Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:22,263 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1604): Closing e91c46b2a26c8984176ee1d09e735ae7, disabling compactions & flushes 2023-07-12 10:58:22,263 INFO 
[RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1626): Closing region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7. 2023-07-12 10:58:22,263 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7. 2023-07-12 10:58:22,263 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7. after waiting 0 ms 2023-07-12 10:58:22,263 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7. 2023-07-12 10:58:22,263 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1838): Closed Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7. 2023-07-12 10:58:22,263 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1558): Region close journal for e91c46b2a26c8984176ee1d09e735ae7: 2023-07-12 10:58:22,265 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=113, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:22,266 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159502266"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159502266"}]},"ts":"1689159502266"} 2023-07-12 10:58:22,267 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
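Editor's note: by this point the log has shown namespace Group_ns created with hbase.rsgroup.name=appInfo (pid=111), a first CreateTableProcedure (pid=112) rolled back with "No online servers in the rsgroup appInfo", the draining znode deleted, and a second create (pid=113) now writing the FS layout and the meta rows. The client-side calls behind that sequence look roughly like the sketch below; the descriptors mirror the {NAME => 'f', ...} schema printed above, and catching HBaseIOException on the first attempt is an assumption about how the test asserts the expected failure.

    // Sketch of the create-namespace / create-table calls behind pids 111-113 (illustrative, not the test source).
    import org.apache.hadoop.hbase.HBaseIOException;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class GroupNsCreate {
      static void run(Admin admin) throws Exception {
        // CreateNamespaceProcedure pid=111: namespace pinned to the appInfo rsgroup
        admin.createNamespace(NamespaceDescriptor.create("Group_ns")
            .addConfiguration("hbase.rsgroup.name", "appInfo")
            .build());
        TableName tn = TableName.valueOf("Group_ns:testCreateWhenRsgroupNoOnlineServers");
        TableDescriptor td = TableDescriptorBuilder.newBuilder(tn)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
            .build();
        try {
          // pid=112: rolled back while the only appInfo server is still draining
          admin.createTable(td);
        } catch (HBaseIOException expected) {
          // "No online servers in the rsgroup appInfo which table ... belongs to"
        }
        // pid=113: the same call succeeds once the server has left /hbase/draining
        admin.createTable(td);
      }
    }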
2023-07-12 10:58:22,268 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=113, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:22,268 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159502268"}]},"ts":"1689159502268"} 2023-07-12 10:58:22,269 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=ENABLING in hbase:meta 2023-07-12 10:58:22,272 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=e91c46b2a26c8984176ee1d09e735ae7, ASSIGN}] 2023-07-12 10:58:22,274 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=e91c46b2a26c8984176ee1d09e735ae7, ASSIGN 2023-07-12 10:58:22,275 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=e91c46b2a26c8984176ee1d09e735ae7, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,42501,1689159484335; forceNewPlan=false, retain=false 2023-07-12 10:58:22,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-12 10:58:22,427 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=e91c46b2a26c8984176ee1d09e735ae7, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:22,427 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159502427"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159502427"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159502427"}]},"ts":"1689159502427"} 2023-07-12 10:58:22,428 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; OpenRegionProcedure e91c46b2a26c8984176ee1d09e735ae7, server=jenkins-hbase9.apache.org,42501,1689159484335}] 2023-07-12 10:58:22,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-12 10:58:22,584 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7. 
2023-07-12 10:58:22,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e91c46b2a26c8984176ee1d09e735ae7, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:22,585 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testCreateWhenRsgroupNoOnlineServers e91c46b2a26c8984176ee1d09e735ae7 2023-07-12 10:58:22,585 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:22,585 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for e91c46b2a26c8984176ee1d09e735ae7 2023-07-12 10:58:22,585 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for e91c46b2a26c8984176ee1d09e735ae7 2023-07-12 10:58:22,586 INFO [StoreOpener-e91c46b2a26c8984176ee1d09e735ae7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e91c46b2a26c8984176ee1d09e735ae7 2023-07-12 10:58:22,588 DEBUG [StoreOpener-e91c46b2a26c8984176ee1d09e735ae7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/e91c46b2a26c8984176ee1d09e735ae7/f 2023-07-12 10:58:22,588 DEBUG [StoreOpener-e91c46b2a26c8984176ee1d09e735ae7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/e91c46b2a26c8984176ee1d09e735ae7/f 2023-07-12 10:58:22,589 INFO [StoreOpener-e91c46b2a26c8984176ee1d09e735ae7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e91c46b2a26c8984176ee1d09e735ae7 columnFamilyName f 2023-07-12 10:58:22,590 INFO [StoreOpener-e91c46b2a26c8984176ee1d09e735ae7-1] regionserver.HStore(310): Store=e91c46b2a26c8984176ee1d09e735ae7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:22,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/e91c46b2a26c8984176ee1d09e735ae7 2023-07-12 10:58:22,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/e91c46b2a26c8984176ee1d09e735ae7 2023-07-12 10:58:22,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for e91c46b2a26c8984176ee1d09e735ae7 2023-07-12 10:58:22,596 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/e91c46b2a26c8984176ee1d09e735ae7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:22,597 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened e91c46b2a26c8984176ee1d09e735ae7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11322273760, jitterRate=0.05446891486644745}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:22,597 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for e91c46b2a26c8984176ee1d09e735ae7: 2023-07-12 10:58:22,598 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7., pid=115, masterSystemTime=1689159502580 2023-07-12 10:58:22,599 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7. 2023-07-12 10:58:22,599 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7. 
2023-07-12 10:58:22,600 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=e91c46b2a26c8984176ee1d09e735ae7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:22,600 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159502600"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159502600"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159502600"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159502600"}]},"ts":"1689159502600"} 2023-07-12 10:58:22,606 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-12 10:58:22,606 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; OpenRegionProcedure e91c46b2a26c8984176ee1d09e735ae7, server=jenkins-hbase9.apache.org,42501,1689159484335 in 173 msec 2023-07-12 10:58:22,608 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=114, resume processing ppid=113 2023-07-12 10:58:22,608 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=114, ppid=113, state=SUCCESS; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=e91c46b2a26c8984176ee1d09e735ae7, ASSIGN in 334 msec 2023-07-12 10:58:22,609 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=113, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:22,609 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159502609"}]},"ts":"1689159502609"} 2023-07-12 10:58:22,611 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=ENABLED in hbase:meta 2023-07-12 10:58:22,613 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=113, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:22,614 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 388 msec 2023-07-12 10:58:22,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-12 10:58:22,833 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 113 completed 2023-07-12 10:58:22,834 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:22,840 INFO [Listener at localhost/44831] client.HBaseAdmin$15(890): Started disable of Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:22,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable Group_ns:testCreateWhenRsgroupNoOnlineServers 
2023-07-12 10:58:22,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=116, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:22,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-12 10:58:22,845 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159502845"}]},"ts":"1689159502845"} 2023-07-12 10:58:22,847 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=DISABLING in hbase:meta 2023-07-12 10:58:22,848 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_ns:testCreateWhenRsgroupNoOnlineServers to state=DISABLING 2023-07-12 10:58:22,852 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=117, ppid=116, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=e91c46b2a26c8984176ee1d09e735ae7, UNASSIGN}] 2023-07-12 10:58:22,854 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, ppid=116, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=e91c46b2a26c8984176ee1d09e735ae7, UNASSIGN 2023-07-12 10:58:22,855 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=e91c46b2a26c8984176ee1d09e735ae7, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:22,856 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159502855"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159502855"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159502855"}]},"ts":"1689159502855"} 2023-07-12 10:58:22,859 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE; CloseRegionProcedure e91c46b2a26c8984176ee1d09e735ae7, server=jenkins-hbase9.apache.org,42501,1689159484335}] 2023-07-12 10:58:22,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-12 10:58:23,011 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close e91c46b2a26c8984176ee1d09e735ae7 2023-07-12 10:58:23,012 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing e91c46b2a26c8984176ee1d09e735ae7, disabling compactions & flushes 2023-07-12 10:58:23,012 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7. 2023-07-12 10:58:23,012 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7. 
2023-07-12 10:58:23,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7. after waiting 0 ms 2023-07-12 10:58:23,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7. 2023-07-12 10:58:23,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/e91c46b2a26c8984176ee1d09e735ae7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:23,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7. 2023-07-12 10:58:23,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for e91c46b2a26c8984176ee1d09e735ae7: 2023-07-12 10:58:23,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed e91c46b2a26c8984176ee1d09e735ae7 2023-07-12 10:58:23,021 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=e91c46b2a26c8984176ee1d09e735ae7, regionState=CLOSED 2023-07-12 10:58:23,021 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159503021"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159503021"}]},"ts":"1689159503021"} 2023-07-12 10:58:23,024 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-12 10:58:23,024 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure e91c46b2a26c8984176ee1d09e735ae7, server=jenkins-hbase9.apache.org,42501,1689159484335 in 163 msec 2023-07-12 10:58:23,025 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=117, resume processing ppid=116 2023-07-12 10:58:23,025 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=117, ppid=116, state=SUCCESS; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=e91c46b2a26c8984176ee1d09e735ae7, UNASSIGN in 175 msec 2023-07-12 10:58:23,026 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159503026"}]},"ts":"1689159503026"} 2023-07-12 10:58:23,027 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=DISABLED in hbase:meta 2023-07-12 10:58:23,029 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_ns:testCreateWhenRsgroupNoOnlineServers to state=DISABLED 2023-07-12 10:58:23,030 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=116, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 189 msec 2023-07-12 10:58:23,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-12 10:58:23,147 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 116 completed 2023-07-12 10:58:23,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:23,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=119, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:23,150 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=119, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:23,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_ns:testCreateWhenRsgroupNoOnlineServers' from rsgroup 'appInfo' 2023-07-12 10:58:23,151 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=119, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:23,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:23,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:23,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-12 10:58:23,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:23,155 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/e91c46b2a26c8984176ee1d09e735ae7 2023-07-12 10:58:23,156 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/e91c46b2a26c8984176ee1d09e735ae7/f, FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/e91c46b2a26c8984176ee1d09e735ae7/recovered.edits] 2023-07-12 10:58:23,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=119 2023-07-12 10:58:23,161 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/e91c46b2a26c8984176ee1d09e735ae7/recovered.edits/4.seqid to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/e91c46b2a26c8984176ee1d09e735ae7/recovered.edits/4.seqid 2023-07-12 10:58:23,162 DEBUG [HFileArchiver-8] 
backup.HFileArchiver(596): Deleted hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/e91c46b2a26c8984176ee1d09e735ae7 2023-07-12 10:58:23,162 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_ns:testCreateWhenRsgroupNoOnlineServers regions 2023-07-12 10:58:23,164 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=119, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:23,166 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_ns:testCreateWhenRsgroupNoOnlineServers from hbase:meta 2023-07-12 10:58:23,167 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_ns:testCreateWhenRsgroupNoOnlineServers' descriptor. 2023-07-12 10:58:23,168 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=119, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:23,168 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_ns:testCreateWhenRsgroupNoOnlineServers' from region states. 2023-07-12 10:58:23,168 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159503168"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:23,169 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 10:58:23,169 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e91c46b2a26c8984176ee1d09e735ae7, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689159502224.e91c46b2a26c8984176ee1d09e735ae7.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 10:58:23,169 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_ns:testCreateWhenRsgroupNoOnlineServers' as deleted. 
2023-07-12 10:58:23,170 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689159503170"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:23,171 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_ns:testCreateWhenRsgroupNoOnlineServers state from META 2023-07-12 10:58:23,172 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=119, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:23,174 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=119, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 25 msec 2023-07-12 10:58:23,223 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 10:58:23,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=119 2023-07-12 10:58:23,259 INFO [Listener at localhost/44831] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 119 completed 2023-07-12 10:58:23,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.HMaster$17(3086): Client=jenkins//172.31.2.10 delete Group_ns 2023-07-12 10:58:23,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-12 10:58:23,266 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=120, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-12 10:58:23,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-12 10:58:23,269 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=120, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-12 10:58:23,271 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=120, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-12 10:58:23,272 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_ns 2023-07-12 10:58:23,272 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:23,273 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=120, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-12 10:58:23,275 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=120, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-12 10:58:23,276 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=120, 
state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns in 12 msec 2023-07-12 10:58:23,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-12 10:58:23,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:23,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:23,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:23,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 10:58:23,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:23,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:23,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:23,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:23,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:23,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-12 10:58:23,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 10:58:23,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:23,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:23,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:23,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:23,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:42501] to rsgroup default 2023-07-12 10:58:23,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:23,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-12 10:58:23,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:23,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group appInfo, current retry=0 2023-07-12 10:58:23,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,42501,1689159484335] are moved back to appInfo 2023-07-12 10:58:23,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(438): Move servers done: appInfo => default 2023-07-12 10:58:23,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:23,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup appInfo 2023-07-12 10:58:23,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:23,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:23,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:23,396 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:23,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:23,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:23,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:23,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:23,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:23,405 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:23,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:23,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:41017] to rsgroup master 2023-07-12 10:58:23,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:23,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.CallRunner(144): callId: 693 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:45870 deadline: 1689160703407, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 2023-07-12 10:58:23,407 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 10:58:23,409 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:23,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:23,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:23,410 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:42501, jenkins-hbase9.apache.org:43117, jenkins-hbase9.apache.org:43635, jenkins-hbase9.apache.org:45597], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:23,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:23,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:23,429 INFO [Listener at localhost/44831] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateWhenRsgroupNoOnlineServers Thread=516 (was 515) Potentially hanging thread: hconnection-0x324c1766-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x324c1766-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x324c1766-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: hconnection-0x324c1766-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1281706449_17 at /127.0.0.1:33906 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=811 (was 811), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=312 (was 312), ProcessCount=170 (was 170), AvailableMemoryMB=8220 (was 8204) - AvailableMemoryMB LEAK? - 2023-07-12 10:58:23,429 WARN [Listener at localhost/44831] hbase.ResourceChecker(130): Thread=516 is superior to 500 2023-07-12 10:58:23,445 INFO [Listener at localhost/44831] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testBasicStartUp Thread=516, OpenFileDescriptor=811, MaxFileDescriptor=60000, SystemLoadAverage=312, ProcessCount=170, AvailableMemoryMB=8220 2023-07-12 10:58:23,445 WARN [Listener at localhost/44831] hbase.ResourceChecker(130): Thread=516 is superior to 500 2023-07-12 10:58:23,445 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(132): testBasicStartUp 2023-07-12 10:58:23,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:23,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:23,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:23,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:23,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:23,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:23,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:23,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:23,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:23,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:23,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:23,458 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:23,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:23,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:23,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:23,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:23,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:23,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:23,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:23,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:41017] to rsgroup master 2023-07-12 10:58:23,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:23,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.CallRunner(144): callId: 721 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:45870 deadline: 1689160703468, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 2023-07-12 10:58:23,469 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:23,470 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:23,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:23,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:23,471 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:42501, jenkins-hbase9.apache.org:43117, jenkins-hbase9.apache.org:43635, jenkins-hbase9.apache.org:45597], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:23,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:23,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:23,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:23,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:23,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:23,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:23,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 
2023-07-12 10:58:23,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 10:58:23,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:23,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:23,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:23,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:23,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:23,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:23,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:23,487 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:23,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:23,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:23,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:23,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:23,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:23,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:23,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:23,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:41017] to rsgroup master 2023-07-12 10:58:23,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or 
it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:23,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.CallRunner(144): callId: 751 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:45870 deadline: 1689160703497, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 2023-07-12 10:58:23,498 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:23,500 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:23,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:23,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:23,502 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:42501, jenkins-hbase9.apache.org:43117, jenkins-hbase9.apache.org:43635, jenkins-hbase9.apache.org:45597], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:23,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:23,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:23,522 INFO [Listener at localhost/44831] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testBasicStartUp Thread=517 (was 516) - Thread LEAK? 
-, OpenFileDescriptor=811 (was 811), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=312 (was 312), ProcessCount=170 (was 170), AvailableMemoryMB=8220 (was 8220) 2023-07-12 10:58:23,522 WARN [Listener at localhost/44831] hbase.ResourceChecker(130): Thread=517 is superior to 500 2023-07-12 10:58:23,539 INFO [Listener at localhost/44831] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testRSGroupsWithHBaseQuota Thread=517, OpenFileDescriptor=811, MaxFileDescriptor=60000, SystemLoadAverage=312, ProcessCount=170, AvailableMemoryMB=8219 2023-07-12 10:58:23,539 WARN [Listener at localhost/44831] hbase.ResourceChecker(130): Thread=517 is superior to 500 2023-07-12 10:58:23,539 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(132): testRSGroupsWithHBaseQuota 2023-07-12 10:58:23,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:23,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:23,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:23,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 10:58:23,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:23,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:23,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:23,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:23,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:23,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:23,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:23,561 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:23,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:23,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:23,564 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:23,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:23,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:23,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:23,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:23,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:41017] to rsgroup master 2023-07-12 10:58:23,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:23,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] ipc.CallRunner(144): callId: 779 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:45870 deadline: 1689160703570, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 2023-07-12 10:58:23,571 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor63.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:41017 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 10:58:23,572 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:23,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:23,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:23,573 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:42501, jenkins-hbase9.apache.org:43117, jenkins-hbase9.apache.org:43635, jenkins-hbase9.apache.org:45597], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:23,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:23,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41017] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:23,574 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBasics(309): Shutting down cluster 2023-07-12 10:58:23,574 INFO [Listener at localhost/44831] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 10:58:23,574 DEBUG [Listener at localhost/44831] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x15f52062 to 127.0.0.1:49301 2023-07-12 10:58:23,574 DEBUG [Listener at localhost/44831] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:23,575 DEBUG [Listener at localhost/44831] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 10:58:23,575 DEBUG [Listener at localhost/44831] util.JVMClusterUtil(257): Found active master hash=1246431688, stopped=false 2023-07-12 10:58:23,575 DEBUG [Listener at localhost/44831] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 10:58:23,575 DEBUG [Listener at localhost/44831] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 10:58:23,576 INFO [Listener at localhost/44831] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase9.apache.org,41017,1689159482181 2023-07-12 10:58:23,578 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:23,578 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:23,578 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:23,578 INFO [Listener at localhost/44831] 
procedure2.ProcedureExecutor(629): Stopping 2023-07-12 10:58:23,578 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:23,578 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:23,578 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:23,578 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:23,578 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:23,578 DEBUG [Listener at localhost/44831] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2b1f7a51 to 127.0.0.1:49301 2023-07-12 10:58:23,578 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:23,579 DEBUG [Listener at localhost/44831] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:23,579 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:23,579 INFO [Listener at localhost/44831] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,42501,1689159484335' ***** 2023-07-12 10:58:23,579 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:23,579 INFO [Listener at localhost/44831] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:23,579 INFO [Listener at localhost/44831] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,45597,1689159484713' ***** 2023-07-12 10:58:23,579 INFO [Listener at localhost/44831] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:23,579 INFO [RS:0;jenkins-hbase9:42501] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:23,580 INFO [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:23,580 INFO [Listener at localhost/44831] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,43117,1689159488336' ***** 2023-07-12 10:58:23,580 INFO [Listener at localhost/44831] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:23,581 INFO [Listener at localhost/44831] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,43635,1689159491271' ***** 2023-07-12 10:58:23,583 INFO [Listener at localhost/44831] 
regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:23,581 INFO [RS:3;jenkins-hbase9:43117] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:23,586 INFO [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:23,590 INFO [RS:4;jenkins-hbase9:43635] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@30087dd3{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:23,590 INFO [RS:0;jenkins-hbase9:42501] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5801de9e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:23,590 INFO [RS:3;jenkins-hbase9:43117] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3d872323{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:23,590 INFO [RS:2;jenkins-hbase9:45597] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4fed8179{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:23,591 INFO [RS:4;jenkins-hbase9:43635] server.AbstractConnector(383): Stopped ServerConnector@6dad6690{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:23,591 INFO [RS:0;jenkins-hbase9:42501] server.AbstractConnector(383): Stopped ServerConnector@9ce6c0e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:23,591 INFO [RS:3;jenkins-hbase9:43117] server.AbstractConnector(383): Stopped ServerConnector@bebdc87{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:23,591 INFO [RS:4;jenkins-hbase9:43635] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:23,591 INFO [RS:3;jenkins-hbase9:43117] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:23,591 INFO [RS:0;jenkins-hbase9:42501] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:23,591 INFO [RS:4;jenkins-hbase9:43635] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3492ad1a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:23,591 INFO [RS:2;jenkins-hbase9:45597] server.AbstractConnector(383): Stopped ServerConnector@100e6ca1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:23,594 INFO [RS:4;jenkins-hbase9:43635] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@f0c18f5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:23,594 INFO [RS:2;jenkins-hbase9:45597] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:23,594 INFO [RS:0;jenkins-hbase9:42501] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@4b249fe8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:23,592 INFO [RS:3;jenkins-hbase9:43117] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@104b60bc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:23,595 INFO [RS:2;jenkins-hbase9:45597] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7bebd089{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:23,596 INFO [RS:3;jenkins-hbase9:43117] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@357467f6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:23,595 INFO [RS:0;jenkins-hbase9:42501] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7f13c5ce{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:23,596 INFO [RS:4;jenkins-hbase9:43635] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:23,596 INFO [RS:2;jenkins-hbase9:45597] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4a70f486{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:23,597 INFO [RS:4;jenkins-hbase9:43635] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:23,597 INFO [RS:4;jenkins-hbase9:43635] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:23,597 INFO [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer(3305): Received CLOSE for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:23,597 INFO [RS:3;jenkins-hbase9:43117] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:23,597 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:23,597 INFO [RS:3;jenkins-hbase9:43117] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:23,597 INFO [RS:3;jenkins-hbase9:43117] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-12 10:58:23,597 INFO [RS:3;jenkins-hbase9:43117] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:23,597 INFO [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:23,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing e5addb24bba6e8be9d4cddc12a45ff25, disabling compactions & flushes 2023-07-12 10:58:23,597 DEBUG [RS:4;jenkins-hbase9:43635] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4fc9ca10 to 127.0.0.1:49301 2023-07-12 10:58:23,598 DEBUG [RS:4;jenkins-hbase9:43635] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:23,597 DEBUG [RS:3;jenkins-hbase9:43117] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x662cd978 to 127.0.0.1:49301 2023-07-12 10:58:23,598 DEBUG [RS:3;jenkins-hbase9:43117] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:23,598 INFO [RS:3;jenkins-hbase9:43117] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:23,598 INFO [RS:3;jenkins-hbase9:43117] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:23,598 INFO [RS:3;jenkins-hbase9:43117] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 10:58:23,598 INFO [RS:0;jenkins-hbase9:42501] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:23,598 INFO [RS:0;jenkins-hbase9:42501] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:23,598 INFO [RS:0;jenkins-hbase9:42501] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:23,598 INFO [RS:2;jenkins-hbase9:45597] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:23,598 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:23,598 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:23,598 INFO [RS:2;jenkins-hbase9:45597] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:23,599 INFO [RS:2;jenkins-hbase9:45597] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-12 10:58:23,598 INFO [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 10:58:23,599 INFO [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(3305): Received CLOSE for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:23,599 INFO [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:23,599 DEBUG [RS:2;jenkins-hbase9:45597] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6cb4b525 to 127.0.0.1:49301 2023-07-12 10:58:23,599 DEBUG [RS:2;jenkins-hbase9:45597] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:23,599 INFO [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 10:58:23,599 DEBUG [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(1478): Online Regions={0832c48321f808d3b4d6fb68605b1448=hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.} 2023-07-12 10:58:23,600 DEBUG [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(1504): Waiting on 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:23,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 0832c48321f808d3b4d6fb68605b1448, disabling compactions & flushes 2023-07-12 10:58:23,600 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:23,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:23,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. after waiting 0 ms 2023-07-12 10:58:23,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:23,600 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 0832c48321f808d3b4d6fb68605b1448 1/1 column families, dataSize=9.58 KB heapSize=15.67 KB 2023-07-12 10:58:23,598 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:23,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:23,597 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:23,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 
after waiting 0 ms 2023-07-12 10:58:23,599 DEBUG [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer(1478): Online Regions={e5addb24bba6e8be9d4cddc12a45ff25=hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.} 2023-07-12 10:58:23,601 DEBUG [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer(1504): Waiting on e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:23,598 INFO [RS:0;jenkins-hbase9:42501] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:23,598 INFO [RS:3;jenkins-hbase9:43117] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 10:58:23,602 DEBUG [RS:0;jenkins-hbase9:42501] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5375f53a to 127.0.0.1:49301 2023-07-12 10:58:23,602 DEBUG [RS:0;jenkins-hbase9:42501] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:23,602 INFO [RS:0;jenkins-hbase9:42501] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,42501,1689159484335; all regions closed. 2023-07-12 10:58:23,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:23,602 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing e5addb24bba6e8be9d4cddc12a45ff25 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-12 10:58:23,603 INFO [RS:3;jenkins-hbase9:43117] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 10:58:23,603 DEBUG [RS:3;jenkins-hbase9:43117] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-12 10:58:23,603 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 10:58:23,603 DEBUG [RS:3;jenkins-hbase9:43117] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-12 10:58:23,603 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 10:58:23,603 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 10:58:23,603 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 10:58:23,603 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 10:58:23,603 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=48.05 KB heapSize=77.25 KB 2023-07-12 10:58:23,614 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-12 10:58:23,614 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-12 10:58:23,620 DEBUG [RS:0;jenkins-hbase9:42501] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs 2023-07-12 10:58:23,620 INFO [RS:0;jenkins-hbase9:42501] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C42501%2C1689159484335:(num 1689159486868) 2023-07-12 10:58:23,620 DEBUG [RS:0;jenkins-hbase9:42501] ipc.AbstractRpcClient(494): Stopping 
rpc client 2023-07-12 10:58:23,620 INFO [RS:0;jenkins-hbase9:42501] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:23,626 INFO [RS:0;jenkins-hbase9:42501] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:23,627 INFO [RS:0;jenkins-hbase9:42501] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:23,627 INFO [RS:0;jenkins-hbase9:42501] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:23,627 INFO [RS:0;jenkins-hbase9:42501] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 10:58:23,627 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:23,628 INFO [RS:0;jenkins-hbase9:42501] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:42501 2023-07-12 10:58:23,634 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.58 KB at sequenceid=79 (bloomFilter=true), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/.tmp/m/a46aa6de7a2d409792a23a50cbb46fc7 2023-07-12 10:58:23,639 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:23,639 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:23,639 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:23,639 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:23,647 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=42.59 KB at sequenceid=156 (bloomFilter=false), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/info/f03719a871834b2389c705f4609fdcac 2023-07-12 10:58:23,649 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a46aa6de7a2d409792a23a50cbb46fc7 2023-07-12 10:58:23,650 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/.tmp/m/a46aa6de7a2d409792a23a50cbb46fc7 as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/a46aa6de7a2d409792a23a50cbb46fc7 2023-07-12 10:58:23,653 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f03719a871834b2389c705f4609fdcac 2023-07-12 10:58:23,659 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a46aa6de7a2d409792a23a50cbb46fc7 2023-07-12 10:58:23,660 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/a46aa6de7a2d409792a23a50cbb46fc7, entries=14, sequenceid=79, filesize=5.5 K 2023-07-12 10:58:23,661 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~9.58 KB/9808, heapSize ~15.66 KB/16032, currentSize=0 B/0 for 0832c48321f808d3b4d6fb68605b1448 in 60ms, sequenceid=79, compaction requested=true 2023-07-12 10:58:23,670 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-12 10:58:23,671 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-12 10:58:23,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/recovered.edits/82.seqid, newMaxSeqId=82, maxSeqId=40 2023-07-12 10:58:23,677 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:23,678 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:23,678 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 0832c48321f808d3b4d6fb68605b1448: 2023-07-12 10:58:23,678 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 
2023-07-12 10:58:23,680 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.73 KB at sequenceid=156 (bloomFilter=false), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/rep_barrier/0ce0a95344df49e981f2d4e98996f0d6 2023-07-12 10:58:23,685 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0ce0a95344df49e981f2d4e98996f0d6 2023-07-12 10:58:23,696 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.73 KB at sequenceid=156 (bloomFilter=false), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/table/3d9afdddc7e1488ba950023ac0c57891 2023-07-12 10:58:23,701 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3d9afdddc7e1488ba950023ac0c57891 2023-07-12 10:58:23,702 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/info/f03719a871834b2389c705f4609fdcac as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/f03719a871834b2389c705f4609fdcac 2023-07-12 10:58:23,703 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:23,703 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:23,703 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:23,704 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:23,703 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:23,703 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,42501,1689159484335 2023-07-12 10:58:23,703 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 
10:58:23,704 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:23,704 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:23,707 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f03719a871834b2389c705f4609fdcac 2023-07-12 10:58:23,708 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/f03719a871834b2389c705f4609fdcac, entries=62, sequenceid=156, filesize=11.7 K 2023-07-12 10:58:23,708 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/rep_barrier/0ce0a95344df49e981f2d4e98996f0d6 as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/rep_barrier/0ce0a95344df49e981f2d4e98996f0d6 2023-07-12 10:58:23,714 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0ce0a95344df49e981f2d4e98996f0d6 2023-07-12 10:58:23,714 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/rep_barrier/0ce0a95344df49e981f2d4e98996f0d6, entries=16, sequenceid=156, filesize=6.7 K 2023-07-12 10:58:23,715 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/table/3d9afdddc7e1488ba950023ac0c57891 as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/3d9afdddc7e1488ba950023ac0c57891 2023-07-12 10:58:23,720 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3d9afdddc7e1488ba950023ac0c57891 2023-07-12 10:58:23,720 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/3d9afdddc7e1488ba950023ac0c57891, entries=23, sequenceid=156, filesize=7.0 K 2023-07-12 10:58:23,721 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~48.05 KB/49206, heapSize ~77.20 KB/79056, currentSize=0 B/0 for 1588230740 in 118ms, sequenceid=156, compaction requested=false 2023-07-12 10:58:23,731 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/recovered.edits/159.seqid, newMaxSeqId=159, maxSeqId=19 2023-07-12 10:58:23,731 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] 
coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:23,732 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 10:58:23,732 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 10:58:23,732 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 10:58:23,800 INFO [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,45597,1689159484713; all regions closed. 2023-07-12 10:58:23,802 DEBUG [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer(1504): Waiting on e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:23,803 INFO [RS:3;jenkins-hbase9:43117] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,43117,1689159488336; all regions closed. 2023-07-12 10:58:23,806 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,42501,1689159484335] 2023-07-12 10:58:23,806 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,42501,1689159484335; numProcessing=1 2023-07-12 10:58:23,807 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,42501,1689159484335 already deleted, retry=false 2023-07-12 10:58:23,807 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,42501,1689159484335 expired; onlineServers=3 2023-07-12 10:58:23,812 DEBUG [RS:2;jenkins-hbase9:45597] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs 2023-07-12 10:58:23,813 INFO [RS:2;jenkins-hbase9:45597] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C45597%2C1689159484713:(num 1689159486863) 2023-07-12 10:58:23,813 DEBUG [RS:2;jenkins-hbase9:45597] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:23,813 INFO [RS:2;jenkins-hbase9:45597] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:23,814 INFO [RS:2;jenkins-hbase9:45597] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:23,814 INFO [RS:2;jenkins-hbase9:45597] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:23,815 INFO [RS:2;jenkins-hbase9:45597] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:23,815 INFO [RS:2;jenkins-hbase9:45597] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 10:58:23,815 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 10:58:23,816 INFO [RS:2;jenkins-hbase9:45597] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:45597 2023-07-12 10:58:23,818 DEBUG [RS:3;jenkins-hbase9:43117] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs 2023-07-12 10:58:23,818 INFO [RS:3;jenkins-hbase9:43117] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C43117%2C1689159488336.meta:.meta(num 1689159489494) 2023-07-12 10:58:23,819 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:23,819 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:23,819 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:23,819 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,45597,1689159484713 2023-07-12 10:58:23,824 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,45597,1689159484713] 2023-07-12 10:58:23,824 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,45597,1689159484713; numProcessing=2 2023-07-12 10:58:23,825 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,45597,1689159484713 already deleted, retry=false 2023-07-12 10:58:23,825 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,45597,1689159484713 expired; onlineServers=2 2023-07-12 10:58:23,827 DEBUG [RS:3;jenkins-hbase9:43117] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs 2023-07-12 10:58:23,827 INFO [RS:3;jenkins-hbase9:43117] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C43117%2C1689159488336:(num 1689159488747) 2023-07-12 10:58:23,827 DEBUG [RS:3;jenkins-hbase9:43117] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:23,827 INFO [RS:3;jenkins-hbase9:43117] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:23,828 INFO [RS:3;jenkins-hbase9:43117] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:23,828 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 10:58:23,829 INFO [RS:3;jenkins-hbase9:43117] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:43117 2023-07-12 10:58:23,831 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:23,831 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,43117,1689159488336 2023-07-12 10:58:23,831 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:23,835 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,43117,1689159488336] 2023-07-12 10:58:23,835 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,43117,1689159488336; numProcessing=3 2023-07-12 10:58:23,836 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,43117,1689159488336 already deleted, retry=false 2023-07-12 10:58:23,836 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,43117,1689159488336 expired; onlineServers=1 2023-07-12 10:58:24,002 DEBUG [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer(1504): Waiting on e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:24,049 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=23 (bloomFilter=true), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/.tmp/info/f991ed8007c04dac837a7f0bdde5ce19 2023-07-12 10:58:24,056 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f991ed8007c04dac837a7f0bdde5ce19 2023-07-12 10:58:24,057 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/.tmp/info/f991ed8007c04dac837a7f0bdde5ce19 as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/f991ed8007c04dac837a7f0bdde5ce19 2023-07-12 10:58:24,063 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f991ed8007c04dac837a7f0bdde5ce19 2023-07-12 10:58:24,063 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/f991ed8007c04dac837a7f0bdde5ce19, entries=2, sequenceid=23, filesize=4.9 K 2023-07-12 10:58:24,064 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of 
dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for e5addb24bba6e8be9d4cddc12a45ff25 in 462ms, sequenceid=23, compaction requested=true 2023-07-12 10:58:24,064 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 10:58:24,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/recovered.edits/26.seqid, newMaxSeqId=26, maxSeqId=16 2023-07-12 10:58:24,073 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:24,073 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for e5addb24bba6e8be9d4cddc12a45ff25: 2023-07-12 10:58:24,073 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:24,077 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:24,077 INFO [RS:3;jenkins-hbase9:43117] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,43117,1689159488336; zookeeper connection closed. 2023-07-12 10:58:24,077 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43117-0x1015920fb08000b, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:24,077 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@444a6c33] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@444a6c33 2023-07-12 10:58:24,177 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:24,177 INFO [RS:2;jenkins-hbase9:45597] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,45597,1689159484713; zookeeper connection closed. 2023-07-12 10:58:24,177 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:45597-0x1015920fb080003, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:24,178 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@39665ba5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@39665ba5 2023-07-12 10:58:24,203 INFO [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,43635,1689159491271; all regions closed. 
2023-07-12 10:58:24,212 DEBUG [RS:4;jenkins-hbase9:43635] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs 2023-07-12 10:58:24,212 INFO [RS:4;jenkins-hbase9:43635] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C43635%2C1689159491271:(num 1689159491635) 2023-07-12 10:58:24,212 DEBUG [RS:4;jenkins-hbase9:43635] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:24,212 INFO [RS:4;jenkins-hbase9:43635] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:24,212 INFO [RS:4;jenkins-hbase9:43635] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:24,212 INFO [RS:4;jenkins-hbase9:43635] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:24,212 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:24,212 INFO [RS:4;jenkins-hbase9:43635] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:24,212 INFO [RS:4;jenkins-hbase9:43635] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 10:58:24,213 INFO [RS:4;jenkins-hbase9:43635] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:43635 2023-07-12 10:58:24,215 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,43635,1689159491271 2023-07-12 10:58:24,215 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:24,216 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,43635,1689159491271] 2023-07-12 10:58:24,216 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,43635,1689159491271; numProcessing=4 2023-07-12 10:58:24,217 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,43635,1689159491271 already deleted, retry=false 2023-07-12 10:58:24,217 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,43635,1689159491271 expired; onlineServers=0 2023-07-12 10:58:24,217 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,41017,1689159482181' ***** 2023-07-12 10:58:24,217 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 10:58:24,218 DEBUG [M:0;jenkins-hbase9:41017] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@b7603f3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:24,218 INFO [M:0;jenkins-hbase9:41017] regionserver.HRegionServer(1109): Stopping infoServer 
2023-07-12 10:58:24,220 INFO [M:0;jenkins-hbase9:41017] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@18ce6625{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 10:58:24,221 INFO [M:0;jenkins-hbase9:41017] server.AbstractConnector(383): Stopped ServerConnector@79ad9a6d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:24,221 INFO [M:0;jenkins-hbase9:41017] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:24,221 INFO [M:0;jenkins-hbase9:41017] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1c693181{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:24,222 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:24,222 INFO [M:0;jenkins-hbase9:41017] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7b825da1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:24,222 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:24,222 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:24,222 INFO [M:0;jenkins-hbase9:41017] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,41017,1689159482181 2023-07-12 10:58:24,223 INFO [M:0;jenkins-hbase9:41017] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,41017,1689159482181; all regions closed. 2023-07-12 10:58:24,223 DEBUG [M:0;jenkins-hbase9:41017] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:24,223 INFO [M:0;jenkins-hbase9:41017] master.HMaster(1491): Stopping master jetty server 2023-07-12 10:58:24,223 INFO [M:0;jenkins-hbase9:41017] server.AbstractConnector(383): Stopped ServerConnector@8d15999{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:24,224 DEBUG [M:0;jenkins-hbase9:41017] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 10:58:24,224 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 10:58:24,224 DEBUG [M:0;jenkins-hbase9:41017] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 10:58:24,224 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159486393] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159486393,5,FailOnTimeoutGroup] 2023-07-12 10:58:24,224 INFO [M:0;jenkins-hbase9:41017] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 
2023-07-12 10:58:24,224 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159486396] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159486396,5,FailOnTimeoutGroup] 2023-07-12 10:58:24,224 INFO [M:0;jenkins-hbase9:41017] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-12 10:58:24,224 INFO [M:0;jenkins-hbase9:41017] hbase.ChoreService(369): Chore service for: master/jenkins-hbase9:0 had [] on shutdown 2023-07-12 10:58:24,224 DEBUG [M:0;jenkins-hbase9:41017] master.HMaster(1512): Stopping service threads 2023-07-12 10:58:24,225 INFO [M:0;jenkins-hbase9:41017] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 10:58:24,225 ERROR [M:0;jenkins-hbase9:41017] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-12 10:58:24,226 INFO [M:0;jenkins-hbase9:41017] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 10:58:24,226 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-12 10:58:24,226 DEBUG [M:0;jenkins-hbase9:41017] zookeeper.ZKUtil(398): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-12 10:58:24,226 WARN [M:0;jenkins-hbase9:41017] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-12 10:58:24,226 INFO [M:0;jenkins-hbase9:41017] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 10:58:24,226 INFO [M:0;jenkins-hbase9:41017] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 10:58:24,226 DEBUG [M:0;jenkins-hbase9:41017] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 10:58:24,226 INFO [M:0;jenkins-hbase9:41017] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:24,226 DEBUG [M:0;jenkins-hbase9:41017] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:24,226 DEBUG [M:0;jenkins-hbase9:41017] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 10:58:24,226 DEBUG [M:0;jenkins-hbase9:41017] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 10:58:24,227 INFO [M:0;jenkins-hbase9:41017] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=402.26 KB heapSize=480.15 KB 2023-07-12 10:58:24,242 INFO [M:0;jenkins-hbase9:41017] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=402.26 KB at sequenceid=892 (bloomFilter=true), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2e5db07848e94ff6bf97226c840d95d8 2023-07-12 10:58:24,248 DEBUG [M:0;jenkins-hbase9:41017] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2e5db07848e94ff6bf97226c840d95d8 as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2e5db07848e94ff6bf97226c840d95d8 2023-07-12 10:58:24,253 INFO [M:0;jenkins-hbase9:41017] regionserver.HStore(1080): Added hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2e5db07848e94ff6bf97226c840d95d8, entries=120, sequenceid=892, filesize=27.1 K 2023-07-12 10:58:24,254 INFO [M:0;jenkins-hbase9:41017] regionserver.HRegion(2948): Finished flush of dataSize ~402.26 KB/411917, heapSize ~480.13 KB/491656, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=892, compaction requested=false 2023-07-12 10:58:24,257 INFO [M:0;jenkins-hbase9:41017] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:24,257 DEBUG [M:0;jenkins-hbase9:41017] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 10:58:24,262 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:24,262 INFO [M:0;jenkins-hbase9:41017] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 10:58:24,262 INFO [M:0;jenkins-hbase9:41017] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:41017 2023-07-12 10:58:24,264 DEBUG [M:0;jenkins-hbase9:41017] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase9.apache.org,41017,1689159482181 already deleted, retry=false 2023-07-12 10:58:24,277 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:24,278 INFO [RS:0;jenkins-hbase9:42501] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,42501,1689159484335; zookeeper connection closed. 
2023-07-12 10:58:24,278 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:42501-0x1015920fb080001, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:24,278 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@79ea0e60] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@79ea0e60 2023-07-12 10:58:24,378 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:24,378 INFO [M:0;jenkins-hbase9:41017] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,41017,1689159482181; zookeeper connection closed. 2023-07-12 10:58:24,378 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:41017-0x1015920fb080000, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:24,478 INFO [RS:4;jenkins-hbase9:43635] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,43635,1689159491271; zookeeper connection closed. 2023-07-12 10:58:24,478 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:24,478 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:43635-0x1015920fb08000d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:24,478 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4ea4e79a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4ea4e79a 2023-07-12 10:58:24,479 INFO [Listener at localhost/44831] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 5 regionserver(s) complete 2023-07-12 10:58:24,479 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBasics(311): Sleeping a bit 2023-07-12 10:58:26,480 DEBUG [Listener at localhost/44831] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 10:58:26,480 DEBUG [Listener at localhost/44831] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 10:58:26,480 DEBUG [Listener at localhost/44831] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 10:58:26,480 DEBUG [Listener at localhost/44831] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-12 10:58:26,481 INFO [Listener at localhost/44831] client.ConnectionUtils(127): master/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:26,482 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:26,482 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:26,482 INFO [Listener at localhost/44831] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:26,482 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:26,482 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:26,483 INFO [Listener at localhost/44831] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:26,484 INFO [Listener at localhost/44831] ipc.NettyRpcServer(120): Bind to /172.31.2.10:43835 2023-07-12 10:58:26,485 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:26,487 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:26,488 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43835 connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:26,491 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:438350x0, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:26,492 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43835-0x1015920fb080010 connected 2023-07-12 10:58:26,494 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:26,495 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:26,495 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:26,496 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43835 2023-07-12 10:58:26,496 DEBUG [Listener at localhost/44831] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43835 2023-07-12 10:58:26,496 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43835 2023-07-12 10:58:26,497 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43835 2023-07-12 10:58:26,497 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43835 2023-07-12 10:58:26,499 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:26,499 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:26,499 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:26,499 INFO [Listener at localhost/44831] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 10:58:26,499 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:26,499 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:26,500 INFO [Listener at localhost/44831] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 10:58:26,501 INFO [Listener at localhost/44831] http.HttpServer(1146): Jetty bound to port 45899 2023-07-12 10:58:26,501 INFO [Listener at localhost/44831] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:26,505 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:26,505 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@68b47818{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:26,505 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:26,505 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@702e03cc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:26,622 INFO [Listener at localhost/44831] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:26,623 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:26,623 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:26,624 INFO [Listener at localhost/44831] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 10:58:26,625 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:26,626 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4ca333bb{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/jetty-0_0_0_0-45899-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3787521638618767902/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 10:58:26,628 INFO [Listener at localhost/44831] server.AbstractConnector(333): Started ServerConnector@7c04e45c{HTTP/1.1, (http/1.1)}{0.0.0.0:45899} 2023-07-12 10:58:26,628 INFO [Listener at localhost/44831] server.Server(415): Started @30320ms 2023-07-12 10:58:26,628 INFO [Listener at localhost/44831] master.HMaster(444): hbase.rootdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5, hbase.cluster.distributed=false 2023-07-12 10:58:26,630 DEBUG [pool-353-thread-1] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: INIT 2023-07-12 10:58:26,648 INFO [Listener at localhost/44831] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:26,648 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:26,649 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated 
priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:26,649 INFO [Listener at localhost/44831] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:26,649 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:26,649 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:26,649 INFO [Listener at localhost/44831] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:26,652 INFO [Listener at localhost/44831] ipc.NettyRpcServer(120): Bind to /172.31.2.10:34455 2023-07-12 10:58:26,653 INFO [Listener at localhost/44831] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:26,654 DEBUG [Listener at localhost/44831] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:26,655 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:26,656 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:26,657 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34455 connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:26,672 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:344550x0, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:26,672 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:344550x0, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:26,673 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:344550x0, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:26,675 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34455-0x1015920fb080011 connected 2023-07-12 10:58:26,678 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:34455-0x1015920fb080011, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:26,685 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34455 2023-07-12 10:58:26,690 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34455 2023-07-12 10:58:26,697 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, 
port=34455 2023-07-12 10:58:26,698 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34455 2023-07-12 10:58:26,698 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34455 2023-07-12 10:58:26,701 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:26,701 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:26,702 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:26,702 INFO [Listener at localhost/44831] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:26,703 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:26,703 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:26,703 INFO [Listener at localhost/44831] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 10:58:26,704 INFO [Listener at localhost/44831] http.HttpServer(1146): Jetty bound to port 43989 2023-07-12 10:58:26,704 INFO [Listener at localhost/44831] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:26,708 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:26,709 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6047944f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:26,709 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:26,709 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@14fd5ddc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:26,838 INFO [Listener at localhost/44831] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:26,839 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:26,839 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:26,839 INFO [Listener at localhost/44831] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 10:58:26,840 INFO 
[Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:26,841 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7898a2c5{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/jetty-0_0_0_0-43989-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1246450110181215052/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:26,844 INFO [Listener at localhost/44831] server.AbstractConnector(333): Started ServerConnector@58e288c{HTTP/1.1, (http/1.1)}{0.0.0.0:43989} 2023-07-12 10:58:26,845 INFO [Listener at localhost/44831] server.Server(415): Started @30537ms 2023-07-12 10:58:26,858 INFO [Listener at localhost/44831] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:26,859 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:26,859 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:26,859 INFO [Listener at localhost/44831] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:26,859 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:26,859 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:26,859 INFO [Listener at localhost/44831] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:26,860 INFO [Listener at localhost/44831] ipc.NettyRpcServer(120): Bind to /172.31.2.10:33873 2023-07-12 10:58:26,860 INFO [Listener at localhost/44831] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:26,861 DEBUG [Listener at localhost/44831] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:26,862 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:26,863 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:26,863 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33873 connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:26,866 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:338730x0, 
quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:26,868 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:338730x0, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:26,868 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33873-0x1015920fb080012 connected 2023-07-12 10:58:26,869 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:33873-0x1015920fb080012, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:26,870 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:33873-0x1015920fb080012, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:26,871 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33873 2023-07-12 10:58:26,873 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33873 2023-07-12 10:58:26,874 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33873 2023-07-12 10:58:26,874 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33873 2023-07-12 10:58:26,875 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33873 2023-07-12 10:58:26,876 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:26,877 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:26,877 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:26,877 INFO [Listener at localhost/44831] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:26,877 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:26,877 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:26,877 INFO [Listener at localhost/44831] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 10:58:26,878 INFO [Listener at localhost/44831] http.HttpServer(1146): Jetty bound to port 32845 2023-07-12 10:58:26,878 INFO [Listener at localhost/44831] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:26,881 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:26,882 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@201b69f1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:26,882 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:26,882 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@cd3c0c4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:27,003 INFO [Listener at localhost/44831] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:27,004 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:27,004 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:27,004 INFO [Listener at localhost/44831] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 10:58:27,006 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:27,007 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@fc6a1d4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/jetty-0_0_0_0-32845-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1052886365796060609/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:27,013 INFO [Listener at localhost/44831] server.AbstractConnector(333): Started ServerConnector@48d69e11{HTTP/1.1, (http/1.1)}{0.0.0.0:32845} 2023-07-12 10:58:27,013 INFO [Listener at localhost/44831] server.Server(415): Started @30706ms 2023-07-12 10:58:27,027 INFO [Listener at localhost/44831] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:27,027 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:27,027 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:27,027 INFO [Listener at localhost/44831] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:27,027 INFO 
[Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:27,027 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:27,027 INFO [Listener at localhost/44831] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:27,028 INFO [Listener at localhost/44831] ipc.NettyRpcServer(120): Bind to /172.31.2.10:41887 2023-07-12 10:58:27,028 INFO [Listener at localhost/44831] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:27,030 DEBUG [Listener at localhost/44831] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:27,031 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:27,032 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:27,032 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41887 connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:27,037 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:418870x0, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:27,038 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:418870x0, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:27,038 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41887-0x1015920fb080013 connected 2023-07-12 10:58:27,039 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:27,040 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:27,040 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41887 2023-07-12 10:58:27,040 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41887 2023-07-12 10:58:27,041 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41887 2023-07-12 10:58:27,046 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41887 2023-07-12 10:58:27,046 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=41887 2023-07-12 10:58:27,048 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:27,048 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:27,048 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:27,049 INFO [Listener at localhost/44831] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:27,049 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:27,049 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:27,049 INFO [Listener at localhost/44831] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 10:58:27,050 INFO [Listener at localhost/44831] http.HttpServer(1146): Jetty bound to port 42151 2023-07-12 10:58:27,050 INFO [Listener at localhost/44831] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:27,055 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:27,055 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@63cd7f67{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:27,056 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:27,056 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3acbc147{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:27,173 INFO [Listener at localhost/44831] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:27,174 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:27,174 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:27,174 INFO [Listener at localhost/44831] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 10:58:27,175 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:27,176 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2e5f2415{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/jetty-0_0_0_0-42151-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7202155896003478824/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:27,179 INFO [Listener at localhost/44831] server.AbstractConnector(333): Started ServerConnector@1c1fe29d{HTTP/1.1, (http/1.1)}{0.0.0.0:42151} 2023-07-12 10:58:27,179 INFO [Listener at localhost/44831] server.Server(415): Started @30871ms 2023-07-12 10:58:27,182 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:27,189 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@f035965{HTTP/1.1, (http/1.1)}{0.0.0.0:45091} 2023-07-12 10:58:27,189 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.Server(415): Started @30881ms 2023-07-12 10:58:27,189 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase9.apache.org,43835,1689159506481 2023-07-12 10:58:27,191 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 10:58:27,191 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase9.apache.org,43835,1689159506481 2023-07-12 10:58:27,194 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:33873-0x1015920fb080012, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:27,194 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:27,194 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:27,194 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:34455-0x1015920fb080011, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:27,195 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:27,195 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 10:58:27,203 INFO [master/jenkins-hbase9:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase9.apache.org,43835,1689159506481 from backup master directory 2023-07-12 10:58:27,203 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 10:58:27,204 WARN [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 10:58:27,204 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase9.apache.org,43835,1689159506481 2023-07-12 10:58:27,204 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase9.apache.org,43835,1689159506481 2023-07-12 10:58:27,204 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 10:58:27,217 INFO [master/jenkins-hbase9:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:27,242 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x4207495e to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:27,248 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4d21cf9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:27,249 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:27,249 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 10:58:27,249 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:27,256 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(288): Renamed hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,41017,1689159482181 to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,41017,1689159482181-dead as it is dead 2023-07-12 10:58:27,258 INFO [master/jenkins-hbase9:0:becomeActiveMaster] util.RecoverLeaseFSUtils(86): Recover lease on dfs file 
hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,41017,1689159482181-dead/jenkins-hbase9.apache.org%2C41017%2C1689159482181.1689159485560 2023-07-12 10:58:27,262 INFO [master/jenkins-hbase9:0:becomeActiveMaster] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,41017,1689159482181-dead/jenkins-hbase9.apache.org%2C41017%2C1689159482181.1689159485560 after 4ms 2023-07-12 10:58:27,263 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(300): Renamed hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,41017,1689159482181-dead/jenkins-hbase9.apache.org%2C41017%2C1689159482181.1689159485560 to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase9.apache.org%2C41017%2C1689159482181.1689159485560 2023-07-12 10:58:27,263 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(302): Delete empty local region wal dir hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,41017,1689159482181-dead 2023-07-12 10:58:27,263 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,43835,1689159506481 2023-07-12 10:58:27,266 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C43835%2C1689159506481, suffix=, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,43835,1689159506481, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/oldWALs, maxLogs=10 2023-07-12 10:58:27,286 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:27,287 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:27,287 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:27,290 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,43835,1689159506481/jenkins-hbase9.apache.org%2C43835%2C1689159506481.1689159507266 2023-07-12 10:58:27,290 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK], DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK], DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK]] 2023-07-12 10:58:27,290 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:27,290 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:27,290 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:27,290 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:27,292 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:27,293 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 10:58:27,294 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 10:58:27,299 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2e5db07848e94ff6bf97226c840d95d8 2023-07-12 10:58:27,299 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:27,300 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5179): Found 1 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals 2023-07-12 10:58:27,300 INFO [master/jenkins-hbase9:0:becomeActiveMaster] 
regionserver.HRegion(5276): Replaying edits from hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase9.apache.org%2C41017%2C1689159482181.1689159485560 2023-07-12 10:58:27,328 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5464): Applied 0, skipped 1051, firstSequenceIdInLog=3, maxSequenceIdInLog=894, path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase9.apache.org%2C41017%2C1689159482181.1689159485560 2023-07-12 10:58:27,329 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5086): Deleted recovered.edits file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase9.apache.org%2C41017%2C1689159482181.1689159485560 2023-07-12 10:58:27,332 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:27,334 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/894.seqid, newMaxSeqId=894, maxSeqId=1 2023-07-12 10:58:27,335 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=895; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10372949600, jitterRate=-0.0339437872171402}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:27,335 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 10:58:27,335 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 10:58:27,336 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 10:58:27,336 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 10:58:27,336 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
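The RecoverLeaseFSUtils entries above ("Recover lease on dfs file ...", "Recovered lease, attempt=0 ... after 4ms") reflect the usual HDFS lease-recovery loop: poll the NameNode until it closes the dead writer's WAL, then replay the edits. A minimal sketch of that loop against a plain DistributedFileSystem; the class and method names here (LeaseRecoverySketch, recoverWalLease) are illustrative and are not the HBase utility itself.

import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public final class LeaseRecoverySketch {
  /** Poll recoverLease() until the NameNode has closed the dead writer's file. */
  static boolean recoverWalLease(DistributedFileSystem dfs, Path wal, long timeoutMs)
      throws IOException, InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    for (int attempt = 0; System.currentTimeMillis() < deadline; attempt++) {
      if (dfs.recoverLease(wal)) {   // true once the file is closed and safe to read
        System.out.println("Recovered lease, attempt=" + attempt + " on file=" + wal);
        return true;
      }
      Thread.sleep(1000L);           // wait before retrying, as the real utility does
    }
    return false;                    // caller decides whether to keep waiting or give up
  }
}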
2023-07-12 10:58:27,337 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-12 10:58:27,345 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta 2023-07-12 10:58:27,345 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace 2023-07-12 10:58:27,346 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup 2023-07-12 10:58:27,346 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default 2023-07-12 10:58:27,346 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase 2023-07-12 10:58:27,346 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, REOPEN/MOVE 2023-07-12 10:58:27,346 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, REOPEN/MOVE 2023-07-12 10:58:27,347 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=14, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-12 10:58:27,347 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=21, state=SUCCESS; ServerCrashProcedure jenkins-hbase9.apache.org,39623,1689159484526, splitWal=true, meta=false 2023-07-12 10:58:27,347 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=22, state=SUCCESS; ModifyNamespaceProcedure, namespace=default 2023-07-12 10:58:27,347 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=23, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-12 10:58:27,348 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=26, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-12 10:58:27,348 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=29, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-12 10:58:27,348 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=30, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-12 10:58:27,348 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=51, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-12 10:58:27,349 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=72, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-12 10:58:27,349 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=73, state=SUCCESS; 
TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, REOPEN/MOVE 2023-07-12 10:58:27,349 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=74, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, REOPEN/MOVE 2023-07-12 10:58:27,349 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=79, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo 2023-07-12 10:58:27,349 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=80, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-12 10:58:27,350 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=83, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-12 10:58:27,350 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=86, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-12 10:58:27,350 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=87, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 10:58:27,350 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=88, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-12 10:58:27,350 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=91, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-12 10:58:27,350 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=94, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-12 10:58:27,351 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=95, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-12 10:58:27,351 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=98, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-12 10:58:27,351 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=99, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-12 10:58:27,352 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=100, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689159500013 type: FLUSH version: 2 ttl: 0 ) 2023-07-12 10:58:27,352 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=103, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-12 10:58:27,352 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=106, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-12 10:58:27,352 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=107, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone 
2023-07-12 10:58:27,352 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=110, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-12 10:58:27,352 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=111, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns 2023-07-12 10:58:27,353 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=112, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:27,353 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=113, state=SUCCESS; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:27,353 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=116, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:27,353 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=119, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:27,354 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=120, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-12 10:58:27,354 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 16 msec 2023-07-12 10:58:27,354 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 10:58:27,358 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [meta-region-server] 2023-07-12 10:58:27,358 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(272): Loaded hbase:meta state=OPEN, location=jenkins-hbase9.apache.org,43117,1689159488336, table=hbase:meta, region=1588230740 2023-07-12 10:58:27,360 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 4 possibly 'live' servers, and 0 'splitting'. 
2023-07-12 10:58:27,365 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,45597,1689159484713 already deleted, retry=false 2023-07-12 10:58:27,366 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase9.apache.org,45597,1689159484713 on jenkins-hbase9.apache.org,43835,1689159506481 2023-07-12 10:58:27,367 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=121, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase9.apache.org,45597,1689159484713, splitWal=true, meta=false 2023-07-12 10:58:27,367 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=121 for jenkins-hbase9.apache.org,45597,1689159484713 (carryingMeta=false) jenkins-hbase9.apache.org,45597,1689159484713/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@4513ee37[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-12 10:58:27,368 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,43635,1689159491271 already deleted, retry=false 2023-07-12 10:58:27,368 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase9.apache.org,43635,1689159491271 on jenkins-hbase9.apache.org,43835,1689159506481 2023-07-12 10:58:27,369 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=122, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase9.apache.org,43635,1689159491271, splitWal=true, meta=false 2023-07-12 10:58:27,369 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=122 for jenkins-hbase9.apache.org,43635,1689159491271 (carryingMeta=false) jenkins-hbase9.apache.org,43635,1689159491271/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@261d2b6d[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-12 10:58:27,370 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,43117,1689159488336 already deleted, retry=false 2023-07-12 10:58:27,371 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase9.apache.org,43117,1689159488336 on jenkins-hbase9.apache.org,43835,1689159506481 2023-07-12 10:58:27,378 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase9.apache.org,43117,1689159488336, splitWal=true, meta=true 2023-07-12 10:58:27,378 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=123 for jenkins-hbase9.apache.org,43117,1689159488336 (carryingMeta=true) jenkins-hbase9.apache.org,43117,1689159488336/CRASHED/regionCount=1/lock=java.util.concurrent.locks.ReentrantReadWriteLock@6249a4ec[Write locks = 1, Read locks = 0], oldState=ONLINE. 
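The ServerManager/ServerCrashProcedure entries above show the new master expiring the servers left over from the previous cluster instance. A hedged sketch of how a client could observe that same state through the public Admin API, assuming the standard HBase 2.x ClusterMetrics calls and the ZooKeeper client port seen in this log; the class name is illustrative only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public final class DeadServerCheckSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.set("hbase.zookeeper.property.clientPort", "49301"); // ensemble port from the log
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Servers the active master still considers dead (each backed by a ServerCrashProcedure).
      for (ServerName dead : admin.getClusterMetrics().getDeadServerNames()) {
        System.out.println("dead: " + dead);
      }
    }
  }
}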
2023-07-12 10:58:27,386 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,42501,1689159484335 already deleted, retry=false 2023-07-12 10:58:27,386 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase9.apache.org,42501,1689159484335 on jenkins-hbase9.apache.org,43835,1689159506481 2023-07-12 10:58:27,387 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=124, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase9.apache.org,42501,1689159484335, splitWal=true, meta=false 2023-07-12 10:58:27,387 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=124 for jenkins-hbase9.apache.org,42501,1689159484335 (carryingMeta=false) jenkins-hbase9.apache.org,42501,1689159484335/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@2c05ec45[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-12 10:58:27,388 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/balancer 2023-07-12 10:58:27,388 INFO [master/jenkins-hbase9:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 10:58:27,389 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 10:58:27,389 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 10:58:27,390 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 10:58:27,391 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 10:58:27,392 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:27,392 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:34455-0x1015920fb080011, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:27,392 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:27,393 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:27,392 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): 
regionserver:33873-0x1015920fb080012, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:27,393 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase9.apache.org,43835,1689159506481, sessionid=0x1015920fb080010, setting cluster-up flag (Was=false) 2023-07-12 10:58:27,396 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 10:58:27,397 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase9.apache.org,43835,1689159506481 2023-07-12 10:58:27,399 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 10:58:27,399 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase9.apache.org,43835,1689159506481 2023-07-12 10:58:27,403 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 10:58:27,403 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 10:58:27,404 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(511): Read ZK GroupInfo count:2 2023-07-12 10:58:27,404 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 10:58:27,405 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,43835,1689159506481] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:58:27,405 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-12 10:58:27,405 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-12 10:58:27,411 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43835,1689159506481] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:27,412 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase9.apache.org/172.31.2.10:43117 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:43117 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 10:58:27,413 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase9.apache.org/172.31.2.10:43117 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:43117 2023-07-12 10:58:27,420 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 10:58:27,421 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 10:58:27,421 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 10:58:27,421 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
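The StochasticLoadBalancer "Loaded config" lines above report maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800 and maxRunningTime=30000. A small sketch of setting those knobs programmatically; the configuration key names are assumptions based on the 2.x StochasticLoadBalancer and should be verified against the running version.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public final class BalancerConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Assumed key names; values mirror the "Loaded config" log line above.
    conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1_000_000);
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
    conf.setInt("hbase.master.balancer.stochastic.maxRunningTime", 30_000);
  }
}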
2023-07-12 10:58:27,421 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:27,421 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:27,421 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:27,421 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:27,421 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase9:0, corePoolSize=10, maxPoolSize=10 2023-07-12 10:58:27,421 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,421 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:27,421 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,424 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689159537424 2023-07-12 10:58:27,424 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 10:58:27,425 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 10:58:27,425 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 10:58:27,425 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 10:58:27,425 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 10:58:27,426 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 10:58:27,431 DEBUG [PEWorker-1] master.DeadServer(103): Processing jenkins-hbase9.apache.org,43117,1689159488336; numProcessing=1 2023-07-12 10:58:27,431 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:27,431 DEBUG [PEWorker-2] master.DeadServer(103): Processing jenkins-hbase9.apache.org,45597,1689159484713; numProcessing=2 2023-07-12 10:58:27,431 INFO [PEWorker-2] procedure.ServerCrashProcedure(161): Start pid=121, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,45597,1689159484713, splitWal=true, meta=false 2023-07-12 10:58:27,431 INFO [PEWorker-1] procedure.ServerCrashProcedure(161): Start pid=123, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,43117,1689159488336, splitWal=true, meta=true 2023-07-12 10:58:27,432 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 10:58:27,432 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 10:58:27,432 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 10:58:27,432 DEBUG [PEWorker-4] master.DeadServer(103): Processing jenkins-hbase9.apache.org,42501,1689159484335; numProcessing=3 2023-07-12 10:58:27,432 DEBUG [PEWorker-3] master.DeadServer(103): Processing jenkins-hbase9.apache.org,43635,1689159491271; numProcessing=4 2023-07-12 10:58:27,432 INFO [PEWorker-4] procedure.ServerCrashProcedure(161): Start pid=124, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,42501,1689159484335, splitWal=true, meta=false 2023-07-12 10:58:27,432 INFO [PEWorker-3] procedure.ServerCrashProcedure(161): Start pid=122, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,43635,1689159491271, splitWal=true, meta=false 2023-07-12 10:58:27,432 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=123, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,43117,1689159488336, splitWal=true, meta=true, isMeta: true 2023-07-12 10:58:27,432 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 10:58:27,433 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 10:58:27,433 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159507433,5,FailOnTimeoutGroup] 2023-07-12 10:58:27,433 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159507433,5,FailOnTimeoutGroup] 2023-07-12 10:58:27,433 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,434 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
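The ChoreService entries above schedule the LogsCleaner and HFileCleaner chores to fire every 600000 ms. As an analogy only, not the HBase ChoreService API, the same periodic pattern with a plain ScheduledExecutorService:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public final class CleanerChoreSketch {
  public static void main(String[] args) {
    ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
    // Fire every 600000 ms, matching the LogsCleaner/HFileCleaner period in the log.
    // The task body is a placeholder; a real cleaner would scan and delete expired files.
    pool.scheduleAtFixedRate(
        () -> System.out.println("cleaner chore: scan archive dir, delete expired files"),
        0L, 600_000L, TimeUnit.MILLISECONDS);
  }
}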
2023-07-12 10:58:27,434 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,434 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,434 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689159507434, completionTime=-1 2023-07-12 10:58:27,434 WARN [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(766): The value of 'hbase.master.wait.on.regionservers.maxtostart' (-1) is set less than 'hbase.master.wait.on.regionservers.mintostart' (1), ignoring. 2023-07-12 10:58:27,434 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=0; waited=0ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-12 10:58:27,435 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,43117,1689159488336-splitting 2023-07-12 10:58:27,436 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,43117,1689159488336-splitting dir is empty, no logs to split. 2023-07-12 10:58:27,436 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase9.apache.org,43117,1689159488336 WAL count=0, meta=true 2023-07-12 10:58:27,438 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,43117,1689159488336-splitting dir is empty, no logs to split. 2023-07-12 10:58:27,438 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase9.apache.org,43117,1689159488336 WAL count=0, meta=true 2023-07-12 10:58:27,438 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase9.apache.org,43117,1689159488336 WAL splitting is done? 
wals=0, meta=true 2023-07-12 10:58:27,438 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 10:58:27,439 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=125, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 10:58:27,441 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=125, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-12 10:58:27,481 INFO [RS:0;jenkins-hbase9:34455] regionserver.HRegionServer(951): ClusterId : 2ee0ec36-84f9-4576-888d-f37f0b52beaa 2023-07-12 10:58:27,481 INFO [RS:1;jenkins-hbase9:33873] regionserver.HRegionServer(951): ClusterId : 2ee0ec36-84f9-4576-888d-f37f0b52beaa 2023-07-12 10:58:27,483 DEBUG [RS:0;jenkins-hbase9:34455] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:27,481 INFO [RS:2;jenkins-hbase9:41887] regionserver.HRegionServer(951): ClusterId : 2ee0ec36-84f9-4576-888d-f37f0b52beaa 2023-07-12 10:58:27,485 DEBUG [RS:2;jenkins-hbase9:41887] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:27,484 DEBUG [RS:1;jenkins-hbase9:33873] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:27,487 DEBUG [RS:0;jenkins-hbase9:34455] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:27,488 DEBUG [RS:0;jenkins-hbase9:34455] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:27,488 DEBUG [RS:2;jenkins-hbase9:41887] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:27,488 DEBUG [RS:2;jenkins-hbase9:41887] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:27,489 DEBUG [RS:0;jenkins-hbase9:34455] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:27,490 DEBUG [RS:2;jenkins-hbase9:41887] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:27,493 DEBUG [RS:0;jenkins-hbase9:34455] zookeeper.ReadOnlyZKClient(139): Connect 0x3e70f1bf to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:27,493 DEBUG [RS:2;jenkins-hbase9:41887] zookeeper.ReadOnlyZKClient(139): Connect 0x2bbf94ff to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:27,502 DEBUG [RS:1;jenkins-hbase9:33873] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:27,502 DEBUG [RS:1;jenkins-hbase9:33873] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:27,515 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43835,1689159506481] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase9.apache.org/172.31.2.10:43117 this server is in 
the failed servers list 2023-07-12 10:58:27,537 DEBUG [RS:1;jenkins-hbase9:33873] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:27,540 DEBUG [RS:1;jenkins-hbase9:33873] zookeeper.ReadOnlyZKClient(139): Connect 0x693f06b5 to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:27,540 DEBUG [RS:0;jenkins-hbase9:34455] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@30b70801, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:27,541 DEBUG [RS:0;jenkins-hbase9:34455] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@34889acf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:27,548 DEBUG [RS:2;jenkins-hbase9:41887] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@25786ff8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:27,548 DEBUG [RS:2;jenkins-hbase9:41887] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2147ff8f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:27,562 DEBUG [RS:2;jenkins-hbase9:41887] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase9:41887 2023-07-12 10:58:27,562 INFO [RS:2;jenkins-hbase9:41887] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:27,562 INFO [RS:2;jenkins-hbase9:41887] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:27,562 DEBUG [RS:2;jenkins-hbase9:41887] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 10:58:27,563 INFO [RS:2;jenkins-hbase9:41887] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,43835,1689159506481 with isa=jenkins-hbase9.apache.org/172.31.2.10:41887, startcode=1689159507026 2023-07-12 10:58:27,563 DEBUG [RS:2;jenkins-hbase9:41887] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:27,565 DEBUG [RS:0;jenkins-hbase9:34455] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase9:34455 2023-07-12 10:58:27,565 INFO [RS:0;jenkins-hbase9:34455] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:27,565 INFO [RS:0;jenkins-hbase9:34455] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:27,565 DEBUG [RS:0;jenkins-hbase9:34455] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-12 10:58:27,573 INFO [RS:0;jenkins-hbase9:34455] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,43835,1689159506481 with isa=jenkins-hbase9.apache.org/172.31.2.10:34455, startcode=1689159506648 2023-07-12 10:58:27,573 DEBUG [RS:0;jenkins-hbase9:34455] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:27,574 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:33977, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:27,575 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43835] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,41887,1689159507026 2023-07-12 10:58:27,575 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,43835,1689159506481] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:58:27,575 DEBUG [RS:2;jenkins-hbase9:41887] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5 2023-07-12 10:58:27,575 DEBUG [RS:2;jenkins-hbase9:41887] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42757 2023-07-12 10:58:27,575 DEBUG [RS:2;jenkins-hbase9:41887] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45899 2023-07-12 10:58:27,579 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,43835,1689159506481] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 10:58:27,579 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:59323, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:27,580 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43835] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,34455,1689159506648 2023-07-12 10:58:27,580 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,43835,1689159506481] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
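
The reportForDuty and ServerManager(394) "Registering regionserver" entries above are the mini-cluster's region servers checking in with the master on port 43835, with the rsgroup ServerEventsListenerThread folding each newly registered server into the default group ("Updated with servers: 1", "... 2"). A minimal sketch of observing the same live-server set from a client, assuming the standard HBase 2.x Admin API and reusing this run's ZooKeeper quorum (127.0.0.1:49301):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ListLiveServers {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Quorum address taken from this test run; adjust for a real deployment.
    conf.set("hbase.zookeeper.quorum", "127.0.0.1:49301");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Live servers as tracked by the active master's ServerManager.
      for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
        System.out.println("live region server: " + sn);
      }
    }
  }
}
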
2023-07-12 10:58:27,580 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,43835,1689159506481] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 10:58:27,580 DEBUG [RS:0;jenkins-hbase9:34455] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5 2023-07-12 10:58:27,580 DEBUG [RS:0;jenkins-hbase9:34455] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42757 2023-07-12 10:58:27,580 DEBUG [RS:0;jenkins-hbase9:34455] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45899 2023-07-12 10:58:27,585 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=2; waited=151ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-12 10:58:27,586 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:27,586 DEBUG [RS:1;jenkins-hbase9:33873] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@67f8f5e2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:27,587 DEBUG [RS:1;jenkins-hbase9:33873] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@63c76dc5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:27,587 DEBUG [RS:2;jenkins-hbase9:41887] zookeeper.ZKUtil(162): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41887,1689159507026 2023-07-12 10:58:27,587 WARN [RS:2;jenkins-hbase9:41887] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 10:58:27,587 INFO [RS:2;jenkins-hbase9:41887] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:27,588 DEBUG [RS:2;jenkins-hbase9:41887] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,41887,1689159507026 2023-07-12 10:58:27,589 DEBUG [RS:0;jenkins-hbase9:34455] zookeeper.ZKUtil(162): regionserver:34455-0x1015920fb080011, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,34455,1689159506648 2023-07-12 10:58:27,589 WARN [RS:0;jenkins-hbase9:34455] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 10:58:27,589 INFO [RS:0;jenkins-hbase9:34455] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:27,589 DEBUG [RS:0;jenkins-hbase9:34455] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,34455,1689159506648 2023-07-12 10:58:27,589 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,41887,1689159507026] 2023-07-12 10:58:27,589 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,34455,1689159506648] 2023-07-12 10:58:27,591 DEBUG [jenkins-hbase9:43835] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=2, allServersCount=2 2023-07-12 10:58:27,591 DEBUG [jenkins-hbase9:43835] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:27,591 DEBUG [jenkins-hbase9:43835] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:27,591 DEBUG [jenkins-hbase9:43835] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:27,591 DEBUG [jenkins-hbase9:43835] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:27,596 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,41887,1689159507026, state=OPENING 2023-07-12 10:58:27,598 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 10:58:27,598 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 10:58:27,598 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=126, ppid=125, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,41887,1689159507026}] 2023-07-12 10:58:27,607 DEBUG [RS:2;jenkins-hbase9:41887] zookeeper.ZKUtil(162): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,34455,1689159506648 2023-07-12 10:58:27,607 DEBUG [RS:2;jenkins-hbase9:41887] zookeeper.ZKUtil(162): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41887,1689159507026 2023-07-12 10:58:27,608 DEBUG [RS:0;jenkins-hbase9:34455] zookeeper.ZKUtil(162): regionserver:34455-0x1015920fb080011, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,34455,1689159506648 2023-07-12 10:58:27,609 DEBUG [RS:0;jenkins-hbase9:34455] zookeeper.ZKUtil(162): regionserver:34455-0x1015920fb080011, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41887,1689159507026 2023-07-12 10:58:27,610 DEBUG [RS:2;jenkins-hbase9:41887] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:27,610 DEBUG [RS:0;jenkins-hbase9:34455] regionserver.Replication(139): Replication stats-in-log period=300 seconds 
2023-07-12 10:58:27,611 INFO [RS:2;jenkins-hbase9:41887] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:27,611 INFO [RS:0;jenkins-hbase9:34455] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:27,614 INFO [RS:2;jenkins-hbase9:41887] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:27,614 INFO [RS:2;jenkins-hbase9:41887] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:27,614 INFO [RS:2;jenkins-hbase9:41887] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,615 DEBUG [RS:1;jenkins-hbase9:33873] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase9:33873 2023-07-12 10:58:27,615 INFO [RS:1;jenkins-hbase9:33873] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:27,615 INFO [RS:1;jenkins-hbase9:33873] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:27,615 DEBUG [RS:1;jenkins-hbase9:33873] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 10:58:27,616 INFO [RS:1;jenkins-hbase9:33873] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,43835,1689159506481 with isa=jenkins-hbase9.apache.org/172.31.2.10:33873, startcode=1689159506858 2023-07-12 10:58:27,616 DEBUG [RS:1;jenkins-hbase9:33873] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:27,617 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:46625, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:27,617 INFO [RS:2;jenkins-hbase9:41887] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:27,618 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43835] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,33873,1689159506858 2023-07-12 10:58:27,618 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,43835,1689159506481] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 10:58:27,618 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,43835,1689159506481] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 10:58:27,618 DEBUG [RS:1;jenkins-hbase9:33873] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5 2023-07-12 10:58:27,618 DEBUG [RS:1;jenkins-hbase9:33873] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42757 2023-07-12 10:58:27,618 DEBUG [RS:1;jenkins-hbase9:33873] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45899 2023-07-12 10:58:27,622 INFO [RS:0;jenkins-hbase9:34455] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:27,622 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:27,622 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:34455-0x1015920fb080011, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:27,622 INFO [RS:0;jenkins-hbase9:34455] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:27,622 INFO [RS:0;jenkins-hbase9:34455] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,623 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:27,623 INFO [RS:0;jenkins-hbase9:34455] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:27,624 INFO [RS:2;jenkins-hbase9:41887] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,624 DEBUG [RS:1;jenkins-hbase9:33873] zookeeper.ZKUtil(162): regionserver:33873-0x1015920fb080012, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,33873,1689159506858 2023-07-12 10:58:27,624 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34455-0x1015920fb080011, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,34455,1689159506648 2023-07-12 10:58:27,624 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,33873,1689159506858] 2023-07-12 10:58:27,624 WARN [RS:1;jenkins-hbase9:33873] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 10:58:27,624 DEBUG [RS:2;jenkins-hbase9:41887] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,625 INFO [RS:1;jenkins-hbase9:33873] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:27,625 DEBUG [RS:2;jenkins-hbase9:41887] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,625 DEBUG [RS:1;jenkins-hbase9:33873] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,33873,1689159506858 2023-07-12 10:58:27,624 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34455-0x1015920fb080011, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41887,1689159507026 2023-07-12 10:58:27,625 DEBUG [RS:2;jenkins-hbase9:41887] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,625 DEBUG [RS:2;jenkins-hbase9:41887] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,625 DEBUG [RS:2;jenkins-hbase9:41887] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,625 DEBUG [RS:2;jenkins-hbase9:41887] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:27,625 DEBUG [RS:2;jenkins-hbase9:41887] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,625 DEBUG [RS:2;jenkins-hbase9:41887] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,625 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34455-0x1015920fb080011, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,33873,1689159506858 2023-07-12 10:58:27,625 INFO [RS:0;jenkins-hbase9:34455] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
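
Each region server's WALFactory(158) line above reports instantiating org.apache.hadoop.hbase.wal.AsyncFSWALProvider, the asynchronous WAL provider that is the branch-2 default. A small sketch of pinning that choice explicitly, assuming the standard hbase.wal.provider key and its asyncfs/filesystem/multiwal values:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderConfig {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // "asyncfs" selects org.apache.hadoop.hbase.wal.AsyncFSWALProvider,
    // the provider the WALFactory lines above report instantiating.
    conf.set("hbase.wal.provider", "asyncfs");
    // Alternatives: "filesystem" (classic FSHLog) or "multiwal" (several WALs per server).
    return conf;
  }
}
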
2023-07-12 10:58:27,625 DEBUG [RS:2;jenkins-hbase9:41887] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,627 DEBUG [RS:0;jenkins-hbase9:34455] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,627 DEBUG [RS:2;jenkins-hbase9:41887] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,627 DEBUG [RS:0;jenkins-hbase9:34455] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,627 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,34455,1689159506648 2023-07-12 10:58:27,628 INFO [RS:2;jenkins-hbase9:41887] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,628 INFO [RS:2;jenkins-hbase9:41887] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,628 INFO [RS:2;jenkins-hbase9:41887] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,628 INFO [RS:2;jenkins-hbase9:41887] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,627 DEBUG [RS:0;jenkins-hbase9:34455] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,630 DEBUG [RS:0;jenkins-hbase9:34455] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,630 DEBUG [RS:0;jenkins-hbase9:34455] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,630 DEBUG [RS:0;jenkins-hbase9:34455] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:27,630 DEBUG [RS:0;jenkins-hbase9:34455] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,630 DEBUG [RS:0;jenkins-hbase9:34455] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,630 DEBUG [RS:0;jenkins-hbase9:34455] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,630 DEBUG [RS:0;jenkins-hbase9:34455] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,628 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase9.apache.org,41887,1689159507026 2023-07-12 10:58:27,631 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,33873,1689159506858 2023-07-12 10:58:27,635 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=201ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-12 10:58:27,638 INFO [RS:0;jenkins-hbase9:34455] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,638 INFO [RS:0;jenkins-hbase9:34455] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,638 INFO [RS:0;jenkins-hbase9:34455] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,638 INFO [RS:0;jenkins-hbase9:34455] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,643 DEBUG [RS:1;jenkins-hbase9:33873] zookeeper.ZKUtil(162): regionserver:33873-0x1015920fb080012, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,34455,1689159506648 2023-07-12 10:58:27,644 DEBUG [RS:1;jenkins-hbase9:33873] zookeeper.ZKUtil(162): regionserver:33873-0x1015920fb080012, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41887,1689159507026 2023-07-12 10:58:27,644 DEBUG [RS:1;jenkins-hbase9:33873] zookeeper.ZKUtil(162): regionserver:33873-0x1015920fb080012, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,33873,1689159506858 2023-07-12 10:58:27,647 INFO [RS:2;jenkins-hbase9:41887] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:27,647 INFO [RS:2;jenkins-hbase9:41887] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,41887,1689159507026-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,648 DEBUG [RS:1;jenkins-hbase9:33873] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:27,648 INFO [RS:1;jenkins-hbase9:33873] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:27,654 INFO [RS:1;jenkins-hbase9:33873] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:27,655 INFO [RS:0;jenkins-hbase9:34455] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:27,655 INFO [RS:0;jenkins-hbase9:34455] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,34455,1689159506648-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,657 INFO [RS:1;jenkins-hbase9:33873] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:27,657 INFO [RS:1;jenkins-hbase9:33873] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
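
The MemStoreFlusher(125) lines above (globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M) follow the usual 2.x sizing: the global limit is a fraction of the region server heap, and the low-water mark is 95% of that limit (782.4 M * 0.95 ≈ 743.3 M). A sketch that sets those two knobs explicitly, assuming the standard property names:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MemStoreLimits {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // Fraction of the region server heap shared by all memstores (2.x default 0.4);
    // the 782.4 M in the log is this fraction of the test JVM's heap.
    conf.set("hbase.regionserver.global.memstore.size", "0.4");
    // Low-water mark as a fraction of the global limit: 782.4 M * 0.95 ≈ 743.3 M.
    conf.set("hbase.regionserver.global.memstore.size.lower.limit", "0.95");
    return conf;
  }
}
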
2023-07-12 10:58:27,657 INFO [RS:1;jenkins-hbase9:33873] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:27,659 INFO [RS:1;jenkins-hbase9:33873] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,659 DEBUG [RS:1;jenkins-hbase9:33873] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,660 DEBUG [RS:1;jenkins-hbase9:33873] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,660 DEBUG [RS:1;jenkins-hbase9:33873] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,660 DEBUG [RS:1;jenkins-hbase9:33873] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,660 DEBUG [RS:1;jenkins-hbase9:33873] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,660 DEBUG [RS:1;jenkins-hbase9:33873] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:27,660 DEBUG [RS:1;jenkins-hbase9:33873] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,660 DEBUG [RS:1;jenkins-hbase9:33873] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,660 DEBUG [RS:1;jenkins-hbase9:33873] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,660 DEBUG [RS:1;jenkins-hbase9:33873] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:27,662 INFO [RS:1;jenkins-hbase9:33873] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,662 INFO [RS:1;jenkins-hbase9:33873] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,663 INFO [RS:1;jenkins-hbase9:33873] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,663 INFO [RS:1;jenkins-hbase9:33873] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:27,665 INFO [RS:2;jenkins-hbase9:41887] regionserver.Replication(203): jenkins-hbase9.apache.org,41887,1689159507026 started 2023-07-12 10:58:27,665 INFO [RS:2;jenkins-hbase9:41887] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,41887,1689159507026, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:41887, sessionid=0x1015920fb080013 2023-07-12 10:58:27,665 DEBUG [RS:2;jenkins-hbase9:41887] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:27,665 DEBUG [RS:2;jenkins-hbase9:41887] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,41887,1689159507026 2023-07-12 10:58:27,665 DEBUG [RS:2;jenkins-hbase9:41887] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,41887,1689159507026' 2023-07-12 10:58:27,665 DEBUG [RS:2;jenkins-hbase9:41887] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:27,666 DEBUG [RS:2;jenkins-hbase9:41887] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:27,666 DEBUG [RS:2;jenkins-hbase9:41887] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:27,666 DEBUG [RS:2;jenkins-hbase9:41887] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:27,666 DEBUG [RS:2;jenkins-hbase9:41887] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,41887,1689159507026 2023-07-12 10:58:27,666 DEBUG [RS:2;jenkins-hbase9:41887] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,41887,1689159507026' 2023-07-12 10:58:27,666 DEBUG [RS:2;jenkins-hbase9:41887] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:27,667 DEBUG [RS:2;jenkins-hbase9:41887] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:27,667 DEBUG [RS:2;jenkins-hbase9:41887] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:27,667 INFO [RS:2;jenkins-hbase9:41887] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 10:58:27,670 INFO [RS:2;jenkins-hbase9:41887] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,670 DEBUG [RS:2;jenkins-hbase9:41887] zookeeper.ZKUtil(398): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 10:58:27,670 INFO [RS:2;jenkins-hbase9:41887] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 10:58:27,671 INFO [RS:2;jenkins-hbase9:41887] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,671 INFO [RS:2;jenkins-hbase9:41887] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:27,673 INFO [RS:0;jenkins-hbase9:34455] regionserver.Replication(203): jenkins-hbase9.apache.org,34455,1689159506648 started 2023-07-12 10:58:27,673 INFO [RS:0;jenkins-hbase9:34455] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,34455,1689159506648, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:34455, sessionid=0x1015920fb080011 2023-07-12 10:58:27,675 DEBUG [RS:0;jenkins-hbase9:34455] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:27,675 DEBUG [RS:0;jenkins-hbase9:34455] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,34455,1689159506648 2023-07-12 10:58:27,675 DEBUG [RS:0;jenkins-hbase9:34455] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,34455,1689159506648' 2023-07-12 10:58:27,675 DEBUG [RS:0;jenkins-hbase9:34455] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:27,676 DEBUG [RS:0;jenkins-hbase9:34455] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:27,676 DEBUG [RS:0;jenkins-hbase9:34455] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:27,676 DEBUG [RS:0;jenkins-hbase9:34455] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:27,676 DEBUG [RS:0;jenkins-hbase9:34455] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,34455,1689159506648 2023-07-12 10:58:27,676 DEBUG [RS:0;jenkins-hbase9:34455] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,34455,1689159506648' 2023-07-12 10:58:27,676 DEBUG [RS:0;jenkins-hbase9:34455] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:27,676 DEBUG [RS:0;jenkins-hbase9:34455] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:27,676 DEBUG [RS:0;jenkins-hbase9:34455] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:27,676 INFO [RS:0;jenkins-hbase9:34455] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 10:58:27,676 INFO [RS:0;jenkins-hbase9:34455] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,677 DEBUG [RS:0;jenkins-hbase9:34455] zookeeper.ZKUtil(398): regionserver:34455-0x1015920fb080011, quorum=127.0.0.1:49301, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 10:58:27,677 INFO [RS:0;jenkins-hbase9:34455] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 10:58:27,677 INFO [RS:0;jenkins-hbase9:34455] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,677 INFO [RS:0;jenkins-hbase9:34455] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
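
By this point each region server has started its ZK procedure members for flush-table-proc and online-snapshot and is watching the /hbase/flush-table-proc/acquired and /hbase/online-snapshot/acquired znodes; these members carry out the per-region-server side of coordinated table flushes and snapshots. A minimal sketch of triggering both operations through the 2.x Admin API (the table and snapshot names are hypothetical, signatures assumed from the 2.x client):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushAndSnapshot {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("demo"); // hypothetical table name
      admin.flush(table);                          // flush the table's memstores
      admin.snapshot("demo_snap", table);          // take an online snapshot of the table
    }
  }
}
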
2023-07-12 10:58:27,680 INFO [RS:1;jenkins-hbase9:33873] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:27,680 INFO [RS:1;jenkins-hbase9:33873] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,33873,1689159506858-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,693 INFO [RS:1;jenkins-hbase9:33873] regionserver.Replication(203): jenkins-hbase9.apache.org,33873,1689159506858 started 2023-07-12 10:58:27,693 INFO [RS:1;jenkins-hbase9:33873] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,33873,1689159506858, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:33873, sessionid=0x1015920fb080012 2023-07-12 10:58:27,693 DEBUG [RS:1;jenkins-hbase9:33873] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:27,693 DEBUG [RS:1;jenkins-hbase9:33873] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,33873,1689159506858 2023-07-12 10:58:27,693 DEBUG [RS:1;jenkins-hbase9:33873] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,33873,1689159506858' 2023-07-12 10:58:27,693 DEBUG [RS:1;jenkins-hbase9:33873] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:27,694 DEBUG [RS:1;jenkins-hbase9:33873] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:27,694 DEBUG [RS:1;jenkins-hbase9:33873] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:27,695 DEBUG [RS:1;jenkins-hbase9:33873] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:27,695 DEBUG [RS:1;jenkins-hbase9:33873] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,33873,1689159506858 2023-07-12 10:58:27,695 DEBUG [RS:1;jenkins-hbase9:33873] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,33873,1689159506858' 2023-07-12 10:58:27,695 DEBUG [RS:1;jenkins-hbase9:33873] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:27,695 DEBUG [RS:1;jenkins-hbase9:33873] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:27,695 DEBUG [RS:1;jenkins-hbase9:33873] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:27,696 INFO [RS:1;jenkins-hbase9:33873] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 10:58:27,696 INFO [RS:1;jenkins-hbase9:33873] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,696 DEBUG [RS:1;jenkins-hbase9:33873] zookeeper.ZKUtil(398): regionserver:33873-0x1015920fb080012, quorum=127.0.0.1:49301, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 10:58:27,696 INFO [RS:1;jenkins-hbase9:33873] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 10:58:27,696 INFO [RS:1;jenkins-hbase9:33873] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:27,696 INFO [RS:1;jenkins-hbase9:33873] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:27,716 WARN [ReadOnlyZKClient-127.0.0.1:49301@0x4207495e] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-12 10:58:27,716 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43835,1689159506481] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:27,718 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:38194, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:27,718 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41887] ipc.CallRunner(144): callId: 2 service: ClientService methodName: Get size: 88 connection: 172.31.2.10:38194 deadline: 1689159567718, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase9.apache.org,41887,1689159507026 2023-07-12 10:58:27,751 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,41887,1689159507026 2023-07-12 10:58:27,752 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:27,754 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:38196, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:27,758 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 10:58:27,758 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:27,760 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C41887%2C1689159507026.meta, suffix=.meta, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,41887,1689159507026, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs, maxLogs=32 2023-07-12 10:58:27,774 INFO [RS:2;jenkins-hbase9:41887] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C41887%2C1689159507026, suffix=, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,41887,1689159507026, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs, maxLogs=32 2023-07-12 10:58:27,781 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:27,781 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:27,787 DEBUG 
[RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:27,789 INFO [RS:0;jenkins-hbase9:34455] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C34455%2C1689159506648, suffix=, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,34455,1689159506648, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs, maxLogs=32 2023-07-12 10:58:27,790 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,41887,1689159507026/jenkins-hbase9.apache.org%2C41887%2C1689159507026.meta.1689159507760.meta 2023-07-12 10:58:27,790 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK], DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK], DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK]] 2023-07-12 10:58:27,790 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:27,791 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:58:27,791 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 10:58:27,791 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
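
The CoprocessorHost(215)/RegionCoprocessorHost(393) entries above show MultiRowMutationEndpoint being loaded from the hbase:meta table descriptor as region 1588230740 opens. Attaching a coprocessor to a user table works the same way; a sketch with a hypothetical table name, assuming the 2.x TableDescriptorBuilder API:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class TableWithCoprocessor {
  public static void main(String[] args) throws Exception {
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo"))                      // hypothetical table
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
        // Same endpoint the meta region loads from its descriptor above.
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .build();
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.createTable(td);
    }
  }
}
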
2023-07-12 10:58:27,791 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 10:58:27,791 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:27,791 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 10:58:27,791 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 10:58:27,794 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 10:58:27,795 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info 2023-07-12 10:58:27,795 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info 2023-07-12 10:58:27,795 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 10:58:27,799 INFO [RS:1;jenkins-hbase9:33873] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C33873%2C1689159506858, suffix=, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,33873,1689159506858, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs, maxLogs=32 2023-07-12 10:58:27,799 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:27,800 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:27,800 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:27,813 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/b49a1b1f3cde4a5f98186fb585abc133 2023-07-12 10:58:27,832 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f03719a871834b2389c705f4609fdcac 2023-07-12 10:58:27,832 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/f03719a871834b2389c705f4609fdcac 2023-07-12 10:58:27,837 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:27,837 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 10:58:27,837 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:27,839 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:58:27,839 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:58:27,839 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 10:58:27,841 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:27,849 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:27,849 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client 
skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:27,849 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:27,850 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:27,850 INFO [RS:2;jenkins-hbase9:41887] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,41887,1689159507026/jenkins-hbase9.apache.org%2C41887%2C1689159507026.1689159507775 2023-07-12 10:58:27,851 INFO [RS:0;jenkins-hbase9:34455] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,34455,1689159506648/jenkins-hbase9.apache.org%2C34455%2C1689159506648.1689159507789 2023-07-12 10:58:27,851 DEBUG [RS:2;jenkins-hbase9:41887] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK], DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK], DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK]] 2023-07-12 10:58:27,851 DEBUG [RS:0;jenkins-hbase9:34455] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK], DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK], DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK]] 2023-07-12 10:58:27,854 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0ce0a95344df49e981f2d4e98996f0d6 2023-07-12 10:58:27,854 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/rep_barrier/0ce0a95344df49e981f2d4e98996f0d6 2023-07-12 10:58:27,858 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:27,858 INFO [RS:1;jenkins-hbase9:33873] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,33873,1689159506858/jenkins-hbase9.apache.org%2C33873%2C1689159506858.1689159507799 2023-07-12 10:58:27,858 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 10:58:27,859 DEBUG [RS:1;jenkins-hbase9:33873] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK], DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK], DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK]] 2023-07-12 10:58:27,859 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table 2023-07-12 10:58:27,859 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table 2023-07-12 10:58:27,860 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 10:58:27,868 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3d9afdddc7e1488ba950023ac0c57891 2023-07-12 10:58:27,868 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/3d9afdddc7e1488ba950023ac0c57891 2023-07-12 10:58:27,874 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/a69ee6b8f1cc4724a5b721bd5c87f29a 2023-07-12 10:58:27,874 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:27,875 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740 2023-07-12 10:58:27,876 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740 2023-07-12 10:58:27,878 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
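
The FlushLargeStoresPolicy(65) line above spells out its fallback: with no hbase.hregion.percolumnfamilyflush.size.lower.bound in the hbase:meta descriptor, the per-family lower bound becomes the region's memstore flush size divided by the number of families, i.e. 128 MB / 3 families (info, rep_barrier, table) ≈ 42.7 M, matching the flushSizeLowerBound=44739242 reported when the region finishes opening below. A sketch of setting that bound explicitly on a hypothetical table descriptor, assuming the key named in the log is honored as a descriptor value:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class PerFamilyFlushBound {
  public static TableDescriptor build() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo")) // hypothetical table
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
        // Key taken from the log message; value here is an illustrative 32 MB bound.
        .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
                  String.valueOf(32L * 1024 * 1024))
        .build();
  }
}
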
2023-07-12 10:58:27,880 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 10:58:27,880 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=160; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10768747360, jitterRate=0.002917751669883728}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 10:58:27,880 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 10:58:27,881 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=126, masterSystemTime=1689159507751 2023-07-12 10:58:27,884 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 10:58:27,885 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 10:58:27,885 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,41887,1689159507026, state=OPEN 2023-07-12 10:58:27,886 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 10:58:27,887 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 10:58:27,888 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=126, resume processing ppid=125 2023-07-12 10:58:27,888 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=126, ppid=125, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,41887,1689159507026 in 289 msec 2023-07-12 10:58:27,890 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-12 10:58:27,890 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 450 msec 2023-07-12 10:58:28,037 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43835,1689159506481] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:28,038 WARN [RS-EventLoopGroup-12-1] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase9.apache.org/172.31.2.10:45597 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:45597 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 10:58:28,039 DEBUG [RS-EventLoopGroup-12-1] ipc.FailedServers(52): Added failed server with address jenkins-hbase9.apache.org/172.31.2.10:45597 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:45597 2023-07-12 10:58:28,145 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43835,1689159506481] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase9.apache.org/172.31.2.10:45597 this server is in the failed servers list 2023-07-12 10:58:28,351 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43835,1689159506481] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase9.apache.org/172.31.2.10:45597 this server is in the failed servers list 2023-07-12 10:58:28,523 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 10:58:28,658 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43835,1689159506481] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase9.apache.org/172.31.2.10:45597 this server is in the failed servers list 2023-07-12 10:58:29,139 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=1705ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=1504ms 2023-07-12 10:58:29,166 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43835,1689159506481] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase9.apache.org/172.31.2.10:45597 this server is in the failed servers list 2023-07-12 10:58:30,179 WARN [RS-EventLoopGroup-12-1] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase9.apache.org/172.31.2.10:45597 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:45597 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 10:58:30,180 DEBUG [RS-EventLoopGroup-12-1] ipc.FailedServers(52): Added failed server with address jenkins-hbase9.apache.org/172.31.2.10:45597 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:45597 2023-07-12 10:58:30,642 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=3208ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=3007ms 2023-07-12 10:58:31,944 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=4510ms, expected min=1 server(s), max=NO_LIMIT server(s), master is running 2023-07-12 10:58:31,944 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-12 10:58:31,948 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=e5addb24bba6e8be9d4cddc12a45ff25, regionState=OPEN, lastHost=jenkins-hbase9.apache.org,43635,1689159491271, regionLocation=jenkins-hbase9.apache.org,43635,1689159491271, openSeqNum=17 2023-07-12 10:58:31,949 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=0832c48321f808d3b4d6fb68605b1448, regionState=OPEN, lastHost=jenkins-hbase9.apache.org,45597,1689159484713, regionLocation=jenkins-hbase9.apache.org,45597,1689159484713, openSeqNum=41 2023-07-12 10:58:31,949 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 10:58:31,949 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689159571949 2023-07-12 10:58:31,949 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689159631949 2023-07-12 10:58:31,949 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-12 10:58:31,978 INFO [PEWorker-2] procedure.ServerCrashProcedure(199): jenkins-hbase9.apache.org,43117,1689159488336 had 1 regions 2023-07-12 10:58:31,978 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,43835,1689159506481-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:31,978 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,43835,1689159506481-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:31,978 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,43835,1689159506481-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:31,978 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase9:43835, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:31,978 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:31,979 INFO [PEWorker-4] procedure.ServerCrashProcedure(199): jenkins-hbase9.apache.org,43635,1689159491271 had 1 regions 2023-07-12 10:58:31,979 INFO [PEWorker-3] procedure.ServerCrashProcedure(199): jenkins-hbase9.apache.org,42501,1689159484335 had 0 regions 2023-07-12 10:58:31,980 WARN [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1240): hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. is NOT online; state={e5addb24bba6e8be9d4cddc12a45ff25 state=OPEN, ts=1689159511948, server=jenkins-hbase9.apache.org,43635,1689159491271}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined. 
2023-07-12 10:58:31,979 INFO [PEWorker-1] procedure.ServerCrashProcedure(199): jenkins-hbase9.apache.org,45597,1689159484713 had 1 regions 2023-07-12 10:58:31,980 INFO [PEWorker-4] procedure.ServerCrashProcedure(300): Splitting WALs pid=122, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,43635,1689159491271, splitWal=true, meta=false, isMeta: false 2023-07-12 10:58:31,980 INFO [PEWorker-3] procedure.ServerCrashProcedure(300): Splitting WALs pid=124, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,42501,1689159484335, splitWal=true, meta=false, isMeta: false 2023-07-12 10:58:31,982 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=121, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,45597,1689159484713, splitWal=true, meta=false, isMeta: false 2023-07-12 10:58:31,982 DEBUG [PEWorker-4] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,43635,1689159491271-splitting 2023-07-12 10:58:31,983 INFO [PEWorker-4] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,43635,1689159491271-splitting dir is empty, no logs to split. 2023-07-12 10:58:31,983 INFO [PEWorker-4] master.SplitWALManager(106): jenkins-hbase9.apache.org,43635,1689159491271 WAL count=0, meta=false 2023-07-12 10:58:31,983 DEBUG [PEWorker-3] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,42501,1689159484335-splitting 2023-07-12 10:58:31,985 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,42501,1689159484335-splitting dir is empty, no logs to split. 2023-07-12 10:58:31,985 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase9.apache.org,42501,1689159484335 WAL count=0, meta=false 2023-07-12 10:58:31,985 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,45597,1689159484713-splitting 2023-07-12 10:58:31,990 INFO [PEWorker-2] procedure.ServerCrashProcedure(300): Splitting WALs pid=123, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,43117,1689159488336, splitWal=true, meta=true, isMeta: false 2023-07-12 10:58:31,991 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,45597,1689159484713-splitting dir is empty, no logs to split. 2023-07-12 10:58:31,991 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase9.apache.org,45597,1689159484713 WAL count=0, meta=false 2023-07-12 10:58:31,993 INFO [PEWorker-4] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,43635,1689159491271-splitting dir is empty, no logs to split. 
2023-07-12 10:58:31,993 INFO [PEWorker-4] master.SplitWALManager(106): jenkins-hbase9.apache.org,43635,1689159491271 WAL count=0, meta=false 2023-07-12 10:58:31,993 DEBUG [PEWorker-4] procedure.ServerCrashProcedure(290): Check if jenkins-hbase9.apache.org,43635,1689159491271 WAL splitting is done? wals=0, meta=false 2023-07-12 10:58:31,994 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,42501,1689159484335-splitting dir is empty, no logs to split. 2023-07-12 10:58:31,994 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase9.apache.org,42501,1689159484335 WAL count=0, meta=false 2023-07-12 10:58:31,994 DEBUG [PEWorker-3] procedure.ServerCrashProcedure(290): Check if jenkins-hbase9.apache.org,42501,1689159484335 WAL splitting is done? wals=0, meta=false 2023-07-12 10:58:31,998 WARN [master/jenkins-hbase9:0.Chore.1] janitor.CatalogJanitor(172): unknown_server=jenkins-hbase9.apache.org,43635,1689159491271/hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25., unknown_server=jenkins-hbase9.apache.org,45597,1689159484713/hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:32,002 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,43117,1689159488336-splitting dir is empty, no logs to split. 2023-07-12 10:58:32,002 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase9.apache.org,43117,1689159488336 WAL count=0, meta=false 2023-07-12 10:58:32,003 INFO [PEWorker-4] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase9.apache.org,43635,1689159491271 failed, ignore...File hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,43635,1689159491271-splitting does not exist. 2023-07-12 10:58:32,003 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,45597,1689159484713-splitting dir is empty, no logs to split. 2023-07-12 10:58:32,003 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase9.apache.org,45597,1689159484713 WAL count=0, meta=false 2023-07-12 10:58:32,003 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase9.apache.org,45597,1689159484713 WAL splitting is done? 
wals=0, meta=false 2023-07-12 10:58:32,004 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=122, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, ASSIGN}] 2023-07-12 10:58:32,005 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=127, ppid=122, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, ASSIGN 2023-07-12 10:58:32,005 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=127, ppid=122, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-12 10:58:32,006 INFO [PEWorker-3] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase9.apache.org,42501,1689159484335 failed, ignore...File hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,42501,1689159484335-splitting does not exist. 2023-07-12 10:58:32,007 INFO [PEWorker-3] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase9.apache.org,42501,1689159484335 after splitting done 2023-07-12 10:58:32,008 DEBUG [PEWorker-3] master.DeadServer(114): Removed jenkins-hbase9.apache.org,42501,1689159484335 from processing; numProcessing=3 2023-07-12 10:58:32,008 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,43117,1689159488336-splitting dir is empty, no logs to split. 2023-07-12 10:58:32,008 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase9.apache.org,43117,1689159488336 WAL count=0, meta=false 2023-07-12 10:58:32,008 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(290): Check if jenkins-hbase9.apache.org,43117,1689159488336 WAL splitting is done? wals=0, meta=false 2023-07-12 10:58:32,009 INFO [PEWorker-1] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase9.apache.org,45597,1689159484713 failed, ignore...File hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,45597,1689159484713-splitting does not exist. 
2023-07-12 10:58:32,010 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=124, state=SUCCESS; ServerCrashProcedure jenkins-hbase9.apache.org,42501,1689159484335, splitWal=true, meta=false in 4.6220 sec 2023-07-12 10:58:32,011 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase9.apache.org,43117,1689159488336 after splitting done 2023-07-12 10:58:32,011 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=121, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, ASSIGN}] 2023-07-12 10:58:32,011 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase9.apache.org,43117,1689159488336 from processing; numProcessing=2 2023-07-12 10:58:32,012 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=128, ppid=121, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, ASSIGN 2023-07-12 10:58:32,013 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; ServerCrashProcedure jenkins-hbase9.apache.org,43117,1689159488336, splitWal=true, meta=true in 4.6400 sec 2023-07-12 10:58:32,013 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=128, ppid=121, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-12 10:58:32,013 DEBUG [jenkins-hbase9:43835] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 10:58:32,013 DEBUG [jenkins-hbase9:43835] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:32,014 DEBUG [jenkins-hbase9:43835] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:32,014 DEBUG [jenkins-hbase9:43835] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:32,014 DEBUG [jenkins-hbase9:43835] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:32,014 DEBUG [jenkins-hbase9:43835] balancer.BaseLoadBalancer$Cluster(378): Number of tables=2, number of hosts=1, number of racks=1 2023-07-12 10:58:32,016 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=e5addb24bba6e8be9d4cddc12a45ff25, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,33873,1689159506858 2023-07-12 10:58:32,016 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159512016"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159512016"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159512016"}]},"ts":"1689159512016"} 2023-07-12 10:58:32,016 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=0832c48321f808d3b4d6fb68605b1448, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,34455,1689159506648 2023-07-12 10:58:32,017 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159512016"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159512016"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159512016"}]},"ts":"1689159512016"} 2023-07-12 10:58:32,018 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=129, ppid=128, state=RUNNABLE; OpenRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,34455,1689159506648}] 2023-07-12 10:58:32,019 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=127, state=RUNNABLE; OpenRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,33873,1689159506858}] 2023-07-12 10:58:32,182 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,34455,1689159506648 2023-07-12 10:58:32,183 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:32,183 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,33873,1689159506858 2023-07-12 10:58:32,183 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:32,185 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:47362, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:32,185 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:50364, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:32,190 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:32,190 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0832c48321f808d3b4d6fb68605b1448, NAME => 'hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:32,190 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:32,191 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e5addb24bba6e8be9d4cddc12a45ff25, NAME => 'hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:32,191 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:58:32,191 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:32,191 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 
service=MultiRowMutationService 2023-07-12 10:58:32,191 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:32,191 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-12 10:58:32,191 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:32,191 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:32,191 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:32,191 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:32,191 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:32,191 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:32,193 INFO [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:32,194 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m 2023-07-12 10:58:32,195 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m 2023-07-12 10:58:32,195 INFO [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0832c48321f808d3b4d6fb68605b1448 columnFamilyName m 2023-07-12 10:58:32,198 INFO 
[StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:32,199 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info 2023-07-12 10:58:32,199 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info 2023-07-12 10:58:32,199 INFO [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e5addb24bba6e8be9d4cddc12a45ff25 columnFamilyName info 2023-07-12 10:58:32,208 WARN [RS-EventLoopGroup-12-1] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase9.apache.org/172.31.2.10:45597 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:45597 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 10:58:32,210 DEBUG [RS-EventLoopGroup-12-1] ipc.FailedServers(52): Added failed server with address jenkins-hbase9.apache.org/172.31.2.10:45597 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:45597 2023-07-12 10:58:32,210 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43835,1689159506481] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=46, started=4183 ms ago, cancelled=false, msg=Call to address=jenkins-hbase9.apache.org/172.31.2.10:45597 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:45597, details=row '\x00' on table 'hbase:rsgroup' at region=hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448., hostname=jenkins-hbase9.apache.org,45597,1689159484713, seqNum=41, see https://s.apache.org/timeout, exception=java.net.ConnectException: Call to address=jenkins-hbase9.apache.org/172.31.2.10:45597 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:45597 at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:186) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:385) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.BufferCallBeforeInitHandler.userEventTriggered(BufferCallBeforeInitHandler.java:99) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:398) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:368) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1428) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:396) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:913) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.failInit(NettyRpcConnection.java:195) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$300(NettyRpcConnection.java:76) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:296) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:287) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:674) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:693) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:45597 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 10:58:32,210 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 34bd11963bd04d34b7e2994e45ec4653 2023-07-12 10:58:32,211 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/34bd11963bd04d34b7e2994e45ec4653 2023-07-12 10:58:32,218 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/60a7cefcb3894d8ba483b968c9da2362 2023-07-12 10:58:32,220 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/8fa7cb9488f24a899b6cdde7163b9c4c 2023-07-12 10:58:32,227 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a46aa6de7a2d409792a23a50cbb46fc7 2023-07-12 10:58:32,227 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/a46aa6de7a2d409792a23a50cbb46fc7 2023-07-12 10:58:32,228 INFO [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(310): 
Store=0832c48321f808d3b4d6fb68605b1448/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:32,229 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:32,230 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/7c61e3ca2f7f49229ba8ba16c44c26fc 2023-07-12 10:58:32,230 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:32,234 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:32,235 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 0832c48321f808d3b4d6fb68605b1448; next sequenceid=83; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@4620f3c6, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:32,235 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 0832c48321f808d3b4d6fb68605b1448: 2023-07-12 10:58:32,238 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448., pid=129, masterSystemTime=1689159512182 2023-07-12 10:58:32,239 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f991ed8007c04dac837a7f0bdde5ce19 2023-07-12 10:58:32,239 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/f991ed8007c04dac837a7f0bdde5ce19 2023-07-12 10:58:32,239 INFO [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(310): Store=e5addb24bba6e8be9d4cddc12a45ff25/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:32,240 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:32,241 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:32,241 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-12 10:58:32,246 
DEBUG [RS:0;jenkins-hbase9:34455-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-12 10:58:32,248 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:32,249 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:32,249 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:32,250 DEBUG [RS:0;jenkins-hbase9:34455-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 16259 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-12 10:58:32,250 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=0832c48321f808d3b4d6fb68605b1448, regionState=OPEN, openSeqNum=83, regionLocation=jenkins-hbase9.apache.org,34455,1689159506648 2023-07-12 10:58:32,251 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened e5addb24bba6e8be9d4cddc12a45ff25; next sequenceid=27; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9511339680, jitterRate=-0.11418746411800385}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:32,251 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159512250"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159512250"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159512250"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159512250"}]},"ts":"1689159512250"} 2023-07-12 10:58:32,251 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for e5addb24bba6e8be9d4cddc12a45ff25: 2023-07-12 10:58:32,252 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25., pid=130, masterSystemTime=1689159512183 2023-07-12 10:58:32,253 DEBUG [RS:0;jenkins-hbase9:34455-shortCompactions-0] regionserver.HStore(1912): 0832c48321f808d3b4d6fb68605b1448/m is initiating minor compaction (all files) 2023-07-12 10:58:32,253 INFO [RS:0;jenkins-hbase9:34455-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 0832c48321f808d3b4d6fb68605b1448/m in hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 
2023-07-12 10:58:32,253 INFO [RS:0;jenkins-hbase9:34455-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/8fa7cb9488f24a899b6cdde7163b9c4c, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/34bd11963bd04d34b7e2994e45ec4653, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/a46aa6de7a2d409792a23a50cbb46fc7] into tmpdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/.tmp, totalSize=15.9 K 2023-07-12 10:58:32,254 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=129, resume processing ppid=128 2023-07-12 10:58:32,254 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=129, ppid=128, state=SUCCESS; OpenRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,34455,1689159506648 in 234 msec 2023-07-12 10:58:32,255 DEBUG [RS:0;jenkins-hbase9:34455-shortCompactions-0] compactions.Compactor(207): Compacting 8fa7cb9488f24a899b6cdde7163b9c4c, keycount=3, bloomtype=ROW, size=5.1 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1689159488713 2023-07-12 10:58:32,255 DEBUG [RS:0;jenkins-hbase9:34455-shortCompactions-0] compactions.Compactor(207): Compacting 34bd11963bd04d34b7e2994e45ec4653, keycount=10, bloomtype=ROW, size=5.4 K, encoding=NONE, compression=NONE, seqNum=37, earliestPutTs=1689159495366 2023-07-12 10:58:32,256 DEBUG [RS:0;jenkins-hbase9:34455-shortCompactions-0] compactions.Compactor(207): Compacting a46aa6de7a2d409792a23a50cbb46fc7, keycount=14, bloomtype=ROW, size=5.5 K, encoding=NONE, compression=NONE, seqNum=79, earliestPutTs=1689159503562 2023-07-12 10:58:32,256 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=121 2023-07-12 10:58:32,256 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase9.apache.org,45597,1689159484713 after splitting done 2023-07-12 10:58:32,256 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=121, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, ASSIGN in 243 msec 2023-07-12 10:58:32,256 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase9.apache.org,45597,1689159484713 from processing; numProcessing=1 2023-07-12 10:58:32,258 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=121, state=SUCCESS; ServerCrashProcedure jenkins-hbase9.apache.org,45597,1689159484713, splitWal=true, meta=false in 4.8900 sec 2023-07-12 10:58:32,264 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-12 10:58:32,266 DEBUG [RS:1;jenkins-hbase9:33873-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-12 10:58:32,269 DEBUG [RS:1;jenkins-hbase9:33873-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 14909 starting at candidate #0 after considering 
1 permutations with 1 in ratio 2023-07-12 10:58:32,269 DEBUG [RS:1;jenkins-hbase9:33873-shortCompactions-0] regionserver.HStore(1912): e5addb24bba6e8be9d4cddc12a45ff25/info is initiating minor compaction (all files) 2023-07-12 10:58:32,269 INFO [RS:1;jenkins-hbase9:33873-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of e5addb24bba6e8be9d4cddc12a45ff25/info in hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:32,269 INFO [RS:1;jenkins-hbase9:33873-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/7c61e3ca2f7f49229ba8ba16c44c26fc, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/60a7cefcb3894d8ba483b968c9da2362, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/f991ed8007c04dac837a7f0bdde5ce19] into tmpdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/.tmp, totalSize=14.6 K 2023-07-12 10:58:32,270 DEBUG [RS:1;jenkins-hbase9:33873-shortCompactions-0] compactions.Compactor(207): Compacting 7c61e3ca2f7f49229ba8ba16c44c26fc, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=6, earliestPutTs=1689159488053 2023-07-12 10:58:32,270 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:32,271 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 
2023-07-12 10:58:32,274 DEBUG [RS:1;jenkins-hbase9:33873-shortCompactions-0] compactions.Compactor(207): Compacting 60a7cefcb3894d8ba483b968c9da2362, keycount=1, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1689159491720 2023-07-12 10:58:32,274 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=e5addb24bba6e8be9d4cddc12a45ff25, regionState=OPEN, openSeqNum=27, regionLocation=jenkins-hbase9.apache.org,33873,1689159506858 2023-07-12 10:58:32,274 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159512274"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159512274"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159512274"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159512274"}]},"ts":"1689159512274"} 2023-07-12 10:58:32,274 DEBUG [RS:1;jenkins-hbase9:33873-shortCompactions-0] compactions.Compactor(207): Compacting f991ed8007c04dac837a7f0bdde5ce19, keycount=2, bloomtype=ROW, size=4.9 K, encoding=NONE, compression=NONE, seqNum=23, earliestPutTs=9223372036854775807 2023-07-12 10:58:32,282 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=127 2023-07-12 10:58:32,282 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=127, state=SUCCESS; OpenRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,33873,1689159506858 in 261 msec 2023-07-12 10:58:32,285 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=122 2023-07-12 10:58:32,285 INFO [PEWorker-5] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase9.apache.org,43635,1689159491271 after splitting done 2023-07-12 10:58:32,285 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=122, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, ASSIGN in 279 msec 2023-07-12 10:58:32,285 DEBUG [PEWorker-5] master.DeadServer(114): Removed jenkins-hbase9.apache.org,43635,1689159491271 from processing; numProcessing=0 2023-07-12 10:58:32,287 INFO [RS:0;jenkins-hbase9:34455-shortCompactions-0] throttle.PressureAwareThroughputController(145): 0832c48321f808d3b4d6fb68605b1448#m#compaction#12 average throughput is 0.25 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-12 10:58:32,290 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=122, state=SUCCESS; ServerCrashProcedure jenkins-hbase9.apache.org,43635,1689159491271, splitWal=true, meta=false in 4.9160 sec 2023-07-12 10:58:32,313 INFO [RS:1;jenkins-hbase9:33873-shortCompactions-0] throttle.PressureAwareThroughputController(145): e5addb24bba6e8be9d4cddc12a45ff25#info#compaction#13 average throughput is 0.14 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-07-12 10:58:32,332 DEBUG [RS:0;jenkins-hbase9:34455-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/.tmp/m/bffd8719b2904374a71e69e548411438 as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/bffd8719b2904374a71e69e548411438 2023-07-12 10:58:32,350 DEBUG [RS:0;jenkins-hbase9:34455-shortCompactions-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 10:58:32,351 INFO [RS:0;jenkins-hbase9:34455-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 0832c48321f808d3b4d6fb68605b1448/m of 0832c48321f808d3b4d6fb68605b1448 into bffd8719b2904374a71e69e548411438(size=5.1 K), total size for store is 5.1 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-07-12 10:58:32,351 DEBUG [RS:0;jenkins-hbase9:34455-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 0832c48321f808d3b4d6fb68605b1448: 2023-07-12 10:58:32,352 INFO [RS:0;jenkins-hbase9:34455-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448., storeName=0832c48321f808d3b4d6fb68605b1448/m, priority=13, startTime=1689159512241; duration=0sec 2023-07-12 10:58:32,352 DEBUG [RS:0;jenkins-hbase9:34455-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-12 10:58:32,353 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-12 10:58:32,353 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver Metrics about HBase MasterObservers 2023-07-12 10:58:32,356 DEBUG [RS:1;jenkins-hbase9:33873-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/.tmp/info/d15ac85af42147dfab1746d5f141cc5a as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/d15ac85af42147dfab1746d5f141cc5a 2023-07-12 10:58:32,362 DEBUG [RS:1;jenkins-hbase9:33873-shortCompactions-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 10:58:32,363 INFO [RS:1;jenkins-hbase9:33873-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in e5addb24bba6e8be9d4cddc12a45ff25/info of e5addb24bba6e8be9d4cddc12a45ff25 into d15ac85af42147dfab1746d5f141cc5a(size=5.0 K), total size for store is 5.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-12 10:58:32,363 DEBUG [RS:1;jenkins-hbase9:33873-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for e5addb24bba6e8be9d4cddc12a45ff25: 2023-07-12 10:58:32,363 INFO [RS:1;jenkins-hbase9:33873-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25., storeName=e5addb24bba6e8be9d4cddc12a45ff25/info, priority=13, startTime=1689159512252; duration=0sec 2023-07-12 10:58:32,363 DEBUG [RS:1;jenkins-hbase9:33873-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-12 10:58:32,981 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/namespace 2023-07-12 10:58:32,987 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:32,988 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:50370, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:32,999 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 10:58:33,001 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 10:58:33,001 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 5.797sec 2023-07-12 10:58:33,001 INFO [master/jenkins-hbase9:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
2023-07-12 10:58:33,001 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:33,002 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=131, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-12 10:58:33,002 INFO [master/jenkins-hbase9:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-12 10:58:33,004 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=131, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:33,005 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=131, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:33,006 INFO [master/jenkins-hbase9:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-12 10:58:33,006 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:33,007 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6 empty. 2023-07-12 10:58:33,007 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:33,007 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-12 10:58:33,010 INFO [master/jenkins-hbase9:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-12 10:58:33,010 INFO [master/jenkins-hbase9:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-12 10:58:33,013 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:33,013 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:33,013 INFO [master/jenkins-hbase9:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-07-12 10:58:33,013 INFO [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 10:58:33,013 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,43835,1689159506481-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 10:58:33,014 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,43835,1689159506481-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-12 10:58:33,014 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 10:58:33,021 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:33,022 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => b71f0d9015a7b2292849acab5e81c0c6, NAME => 'hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.tmp 2023-07-12 10:58:33,033 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:33,033 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing b71f0d9015a7b2292849acab5e81c0c6, disabling compactions & flushes 2023-07-12 10:58:33,033 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:33,033 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:33,033 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. after waiting 0 ms 2023-07-12 10:58:33,033 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:33,033 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 
2023-07-12 10:58:33,033 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for b71f0d9015a7b2292849acab5e81c0c6: 2023-07-12 10:58:33,035 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=131, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:33,038 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689159513038"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159513038"}]},"ts":"1689159513038"} 2023-07-12 10:58:33,039 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 10:58:33,040 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=131, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:33,040 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159513040"}]},"ts":"1689159513040"} 2023-07-12 10:58:33,041 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-12 10:58:33,048 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:33,048 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:33,049 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:33,049 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:33,049 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:33,049 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=132, ppid=131, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=b71f0d9015a7b2292849acab5e81c0c6, ASSIGN}] 2023-07-12 10:58:33,051 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, ppid=131, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=b71f0d9015a7b2292849acab5e81c0c6, ASSIGN 2023-07-12 10:58:33,051 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=132, ppid=131, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=b71f0d9015a7b2292849acab5e81c0c6, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,33873,1689159506858; forceNewPlan=false, retain=false 2023-07-12 10:58:33,086 DEBUG [Listener at localhost/44831] zookeeper.ReadOnlyZKClient(139): Connect 0x7d31982a to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:33,092 DEBUG [Listener at localhost/44831] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@39fd3ec7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:33,093 DEBUG 
[hconnection-0x6ce90710-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:33,095 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:55230, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:33,099 INFO [Listener at localhost/44831] hbase.HBaseTestingUtility(1262): HBase has been restarted 2023-07-12 10:58:33,100 DEBUG [Listener at localhost/44831] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7d31982a to 127.0.0.1:49301 2023-07-12 10:58:33,100 DEBUG [Listener at localhost/44831] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:33,101 INFO [Listener at localhost/44831] hbase.HBaseTestingUtility(2939): Invalidated connection. Updating master addresses before: jenkins-hbase9.apache.org:43835 after: jenkins-hbase9.apache.org:43835 2023-07-12 10:58:33,101 DEBUG [Listener at localhost/44831] zookeeper.ReadOnlyZKClient(139): Connect 0x1a1bd217 to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:33,110 DEBUG [Listener at localhost/44831] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3eee87c3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:33,110 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:33,202 INFO [jenkins-hbase9:43835] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 10:58:33,203 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=b71f0d9015a7b2292849acab5e81c0c6, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,33873,1689159506858 2023-07-12 10:58:33,204 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689159513203"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159513203"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159513203"}]},"ts":"1689159513203"} 2023-07-12 10:58:33,205 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE; OpenRegionProcedure b71f0d9015a7b2292849acab5e81c0c6, server=jenkins-hbase9.apache.org,33873,1689159506858}] 2023-07-12 10:58:33,360 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 
2023-07-12 10:58:33,361 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b71f0d9015a7b2292849acab5e81c0c6, NAME => 'hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:33,361 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:33,361 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:33,361 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:33,361 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:33,362 INFO [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:33,364 DEBUG [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6/q 2023-07-12 10:58:33,364 DEBUG [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6/q 2023-07-12 10:58:33,364 INFO [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b71f0d9015a7b2292849acab5e81c0c6 columnFamilyName q 2023-07-12 10:58:33,365 INFO [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] regionserver.HStore(310): Store=b71f0d9015a7b2292849acab5e81c0c6/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:33,365 INFO [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:33,366 DEBUG 
[StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6/u 2023-07-12 10:58:33,366 DEBUG [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6/u 2023-07-12 10:58:33,367 INFO [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b71f0d9015a7b2292849acab5e81c0c6 columnFamilyName u 2023-07-12 10:58:33,367 INFO [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] regionserver.HStore(310): Store=b71f0d9015a7b2292849acab5e81c0c6/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:33,368 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:33,368 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:33,370 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-12 10:58:33,372 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:33,374 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:33,374 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened b71f0d9015a7b2292849acab5e81c0c6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11365314720, jitterRate=0.05847741663455963}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-12 10:58:33,374 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for b71f0d9015a7b2292849acab5e81c0c6: 2023-07-12 10:58:33,375 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6., pid=133, masterSystemTime=1689159513357 2023-07-12 10:58:33,377 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:33,377 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:33,377 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=b71f0d9015a7b2292849acab5e81c0c6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,33873,1689159506858 2023-07-12 10:58:33,377 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689159513377"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159513377"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159513377"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159513377"}]},"ts":"1689159513377"} 2023-07-12 10:58:33,380 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=132 2023-07-12 10:58:33,380 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; OpenRegionProcedure b71f0d9015a7b2292849acab5e81c0c6, server=jenkins-hbase9.apache.org,33873,1689159506858 in 173 msec 2023-07-12 10:58:33,382 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=132, resume processing ppid=131 2023-07-12 10:58:33,382 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=131, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=b71f0d9015a7b2292849acab5e81c0c6, ASSIGN in 331 msec 2023-07-12 10:58:33,382 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=131, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:33,382 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159513382"}]},"ts":"1689159513382"} 2023-07-12 10:58:33,384 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-12 10:58:33,387 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=131, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:33,389 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=131, state=SUCCESS; CreateTableProcedure table=hbase:quota in 386 msec 2023-07-12 10:58:33,611 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-12 10:58:33,649 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:quota' 2023-07-12 10:58:36,219 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43835,1689159506481] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:36,220 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:47378, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:36,221 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43835,1689159506481] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 10:58:36,221 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43835,1689159506481] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-12 10:58:36,229 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43835,1689159506481] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:36,229 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43835,1689159506481] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:36,230 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43835,1689159506481] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:36,231 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rsgroup 2023-07-12 10:58:36,231 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43835,1689159506481] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 10:58:36,314 DEBUG [Listener at localhost/44831] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 10:58:36,316 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:46812, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 10:58:36,318 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-12 10:58:36,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43835] master.MasterRpcServices(492): Client=jenkins//172.31.2.10 set balanceSwitch=false 2023-07-12 10:58:36,320 DEBUG [Listener at localhost/44831] zookeeper.ReadOnlyZKClient(139): Connect 0x483e2599 to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:36,328 DEBUG [Listener at localhost/44831] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5cfb3585, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:36,328 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:36,331 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [90,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:36,334 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:36,334 DEBUG [Listener at localhost/44831] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:36,338 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1015920fb08001b connected 2023-07-12 10:58:36,340 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:55242, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins 
(auth:SIMPLE), service=ClientService 2023-07-12 10:58:36,347 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBasics(309): Shutting down cluster 2023-07-12 10:58:36,347 INFO [Listener at localhost/44831] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 10:58:36,347 DEBUG [Listener at localhost/44831] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1a1bd217 to 127.0.0.1:49301 2023-07-12 10:58:36,347 DEBUG [Listener at localhost/44831] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:36,347 DEBUG [Listener at localhost/44831] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 10:58:36,347 DEBUG [Listener at localhost/44831] util.JVMClusterUtil(257): Found active master hash=2143282506, stopped=false 2023-07-12 10:58:36,347 DEBUG [Listener at localhost/44831] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 10:58:36,347 DEBUG [Listener at localhost/44831] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 10:58:36,347 DEBUG [Listener at localhost/44831] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-12 10:58:36,347 INFO [Listener at localhost/44831] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase9.apache.org,43835,1689159506481 2023-07-12 10:58:36,349 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:36,349 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:34455-0x1015920fb080011, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:36,349 INFO [Listener at localhost/44831] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 10:58:36,349 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:36,349 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:33873-0x1015920fb080012, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:36,350 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:36,350 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34455-0x1015920fb080011, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:36,350 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:36,350 DEBUG [Listener at localhost/44831] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4207495e to 127.0.0.1:49301 2023-07-12 10:58:36,351 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): 
master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:36,351 DEBUG [Listener at localhost/44831] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:36,351 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33873-0x1015920fb080012, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:36,351 INFO [Listener at localhost/44831] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,34455,1689159506648' ***** 2023-07-12 10:58:36,351 INFO [Listener at localhost/44831] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:36,351 INFO [Listener at localhost/44831] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,33873,1689159506858' ***** 2023-07-12 10:58:36,351 INFO [Listener at localhost/44831] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:36,351 INFO [Listener at localhost/44831] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,41887,1689159507026' ***** 2023-07-12 10:58:36,351 INFO [Listener at localhost/44831] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:36,351 INFO [RS:2;jenkins-hbase9:41887] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:36,351 INFO [RS:1;jenkins-hbase9:33873] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:36,351 INFO [RS:0;jenkins-hbase9:34455] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:36,354 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:36,354 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:36,365 INFO [RS:0;jenkins-hbase9:34455] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7898a2c5{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:36,365 INFO [RS:1;jenkins-hbase9:33873] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@fc6a1d4{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:36,366 INFO [RS:2;jenkins-hbase9:41887] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2e5f2415{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:36,366 INFO [RS:0;jenkins-hbase9:34455] server.AbstractConnector(383): Stopped ServerConnector@58e288c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:36,366 INFO [RS:1;jenkins-hbase9:33873] server.AbstractConnector(383): Stopped ServerConnector@48d69e11{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:36,366 INFO [RS:0;jenkins-hbase9:34455] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:36,366 INFO [RS:1;jenkins-hbase9:33873] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:36,366 INFO [RS:2;jenkins-hbase9:41887] server.AbstractConnector(383): Stopped ServerConnector@1c1fe29d{HTTP/1.1, 
(http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:36,366 INFO [RS:1;jenkins-hbase9:33873] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@cd3c0c4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:36,366 INFO [RS:0;jenkins-hbase9:34455] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@14fd5ddc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:36,366 INFO [RS:1;jenkins-hbase9:33873] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@201b69f1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:36,366 INFO [RS:2;jenkins-hbase9:41887] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:36,367 INFO [RS:0;jenkins-hbase9:34455] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6047944f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:36,367 INFO [RS:2;jenkins-hbase9:41887] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3acbc147{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:36,367 INFO [RS:1;jenkins-hbase9:33873] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:36,367 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:36,367 INFO [RS:2;jenkins-hbase9:41887] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@63cd7f67{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:36,367 INFO [RS:1;jenkins-hbase9:33873] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:36,368 INFO [RS:0;jenkins-hbase9:34455] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:36,368 INFO [RS:0;jenkins-hbase9:34455] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:36,369 INFO [RS:0;jenkins-hbase9:34455] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:36,369 INFO [RS:0;jenkins-hbase9:34455] regionserver.HRegionServer(3305): Received CLOSE for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:36,369 INFO [RS:2;jenkins-hbase9:41887] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:36,368 INFO [RS:1;jenkins-hbase9:33873] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-12 10:58:36,369 INFO [RS:1;jenkins-hbase9:33873] regionserver.HRegionServer(3305): Received CLOSE for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:36,369 INFO [RS:0;jenkins-hbase9:34455] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,34455,1689159506648 2023-07-12 10:58:36,369 DEBUG [RS:0;jenkins-hbase9:34455] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3e70f1bf to 127.0.0.1:49301 2023-07-12 10:58:36,370 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 0832c48321f808d3b4d6fb68605b1448, disabling compactions & flushes 2023-07-12 10:58:36,370 DEBUG [RS:0;jenkins-hbase9:34455] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:36,370 INFO [RS:0;jenkins-hbase9:34455] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 10:58:36,370 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:36,370 INFO [RS:1;jenkins-hbase9:33873] regionserver.HRegionServer(3305): Received CLOSE for b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:36,371 INFO [RS:1;jenkins-hbase9:33873] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,33873,1689159506858 2023-07-12 10:58:36,371 DEBUG [RS:1;jenkins-hbase9:33873] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x693f06b5 to 127.0.0.1:49301 2023-07-12 10:58:36,371 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing e5addb24bba6e8be9d4cddc12a45ff25, disabling compactions & flushes 2023-07-12 10:58:36,370 INFO [RS:2;jenkins-hbase9:41887] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:36,370 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:36,371 INFO [RS:2;jenkins-hbase9:41887] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:36,371 INFO [RS:2;jenkins-hbase9:41887] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,41887,1689159507026 2023-07-12 10:58:36,371 DEBUG [RS:2;jenkins-hbase9:41887] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2bbf94ff to 127.0.0.1:49301 2023-07-12 10:58:36,371 DEBUG [RS:2;jenkins-hbase9:41887] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:36,372 INFO [RS:2;jenkins-hbase9:41887] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:36,372 INFO [RS:2;jenkins-hbase9:41887] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:36,372 INFO [RS:2;jenkins-hbase9:41887] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 10:58:36,372 INFO [RS:2;jenkins-hbase9:41887] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 10:58:36,370 DEBUG [RS:0;jenkins-hbase9:34455] regionserver.HRegionServer(1478): Online Regions={0832c48321f808d3b4d6fb68605b1448=hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.} 2023-07-12 10:58:36,370 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 
2023-07-12 10:58:36,372 DEBUG [RS:0;jenkins-hbase9:34455] regionserver.HRegionServer(1504): Waiting on 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:36,371 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:36,371 DEBUG [RS:1;jenkins-hbase9:33873] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:36,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:36,372 INFO [RS:1;jenkins-hbase9:33873] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-12 10:58:36,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:36,372 DEBUG [RS:1;jenkins-hbase9:33873] regionserver.HRegionServer(1478): Online Regions={e5addb24bba6e8be9d4cddc12a45ff25=hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25., b71f0d9015a7b2292849acab5e81c0c6=hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.} 2023-07-12 10:58:36,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. after waiting 0 ms 2023-07-12 10:58:36,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. after waiting 0 ms 2023-07-12 10:58:36,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:36,372 DEBUG [RS:1;jenkins-hbase9:33873] regionserver.HRegionServer(1504): Waiting on b71f0d9015a7b2292849acab5e81c0c6, e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:36,372 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 0832c48321f808d3b4d6fb68605b1448 1/1 column families, dataSize=242 B heapSize=648 B 2023-07-12 10:58:36,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 
2023-07-12 10:58:36,373 INFO [RS:2;jenkins-hbase9:41887] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 10:58:36,373 DEBUG [RS:2;jenkins-hbase9:41887] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-12 10:58:36,373 DEBUG [RS:2;jenkins-hbase9:41887] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-12 10:58:36,381 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 10:58:36,382 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 10:58:36,382 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 10:58:36,382 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 10:58:36,383 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 10:58:36,383 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.05 KB heapSize=5.87 KB 2023-07-12 10:58:36,389 DEBUG [StoreCloser-hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/7c61e3ca2f7f49229ba8ba16c44c26fc, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/60a7cefcb3894d8ba483b968c9da2362, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/f991ed8007c04dac837a7f0bdde5ce19] to archive 2023-07-12 10:58:36,390 DEBUG [StoreCloser-hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-07-12 10:58:36,394 DEBUG [StoreCloser-hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/7c61e3ca2f7f49229ba8ba16c44c26fc to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/7c61e3ca2f7f49229ba8ba16c44c26fc 2023-07-12 10:58:36,396 DEBUG [StoreCloser-hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/60a7cefcb3894d8ba483b968c9da2362 to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/60a7cefcb3894d8ba483b968c9da2362 2023-07-12 10:58:36,397 DEBUG [StoreCloser-hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/f991ed8007c04dac837a7f0bdde5ce19 to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/f991ed8007c04dac837a7f0bdde5ce19 2023-07-12 10:58:36,430 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=242 B at sequenceid=87 (bloomFilter=true), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/.tmp/m/ff557c1700754976a0534ed7a4fce455 2023-07-12 10:58:36,433 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.97 KB at sequenceid=171 (bloomFilter=false), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/info/b454ddcab3764fdc8af2fffba894338f 2023-07-12 10:58:36,436 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:36,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/.tmp/m/ff557c1700754976a0534ed7a4fce455 as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/ff557c1700754976a0534ed7a4fce455 2023-07-12 10:58:36,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/recovered.edits/30.seqid, newMaxSeqId=30, maxSeqId=26 2023-07-12 10:58:36,453 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/ff557c1700754976a0534ed7a4fce455, entries=2, sequenceid=87, filesize=5.0 K 2023-07-12 10:58:36,454 
INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~242 B/242, heapSize ~632 B/632, currentSize=0 B/0 for 0832c48321f808d3b4d6fb68605b1448 in 82ms, sequenceid=87, compaction requested=false 2023-07-12 10:58:36,454 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:36,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for e5addb24bba6e8be9d4cddc12a45ff25: 2023-07-12 10:58:36,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:36,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing b71f0d9015a7b2292849acab5e81c0c6, disabling compactions & flushes 2023-07-12 10:58:36,460 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:36,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:36,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. after waiting 0 ms 2023-07-12 10:58:36,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:36,465 DEBUG [StoreCloser-hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/8fa7cb9488f24a899b6cdde7163b9c4c, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/34bd11963bd04d34b7e2994e45ec4653, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/a46aa6de7a2d409792a23a50cbb46fc7] to archive 2023-07-12 10:58:36,466 DEBUG [StoreCloser-hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-07-12 10:58:36,468 DEBUG [StoreCloser-hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/8fa7cb9488f24a899b6cdde7163b9c4c to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/8fa7cb9488f24a899b6cdde7163b9c4c 2023-07-12 10:58:36,470 DEBUG [StoreCloser-hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/34bd11963bd04d34b7e2994e45ec4653 to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/34bd11963bd04d34b7e2994e45ec4653 2023-07-12 10:58:36,471 DEBUG [StoreCloser-hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/a46aa6de7a2d409792a23a50cbb46fc7 to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/a46aa6de7a2d409792a23a50cbb46fc7 2023-07-12 10:58:36,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:36,481 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:36,481 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for b71f0d9015a7b2292849acab5e81c0c6: 2023-07-12 10:58:36,481 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:36,485 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=86 B at sequenceid=171 (bloomFilter=false), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/table/d1ffc01c5dfe485096e4d50a2844f7e1 2023-07-12 10:58:36,485 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/recovered.edits/90.seqid, newMaxSeqId=90, maxSeqId=82 2023-07-12 10:58:36,486 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:36,486 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 
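The StoreCloser entries above show HFileArchiver moving the compacted store files for hbase:rsgroup out of data/ and into a mirrored path under archive/. A minimal sketch of inspecting that archive directory with the Hadoop FileSystem API, reusing the NameNode address and path from the log; the class name, and the assumption that the mini cluster's HDFS is still reachable, are illustrative only:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListArchivedStoreFiles {
  public static void main(String[] args) throws Exception {
    // Assumption: the test cluster's NameNode from the log is still running.
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://localhost:42757");
    FileSystem fs = FileSystem.get(conf);

    // Archived store files mirror their original layout under <root>/archive/data/...
    Path archivedFamily = new Path(
        "/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5"
        + "/archive/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m");
    for (FileStatus status : fs.listStatus(archivedFamily)) {
      System.out.println(status.getPath() + " " + status.getLen() + " bytes");
    }
  }
}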
2023-07-12 10:58:36,486 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 0832c48321f808d3b4d6fb68605b1448: 2023-07-12 10:58:36,486 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:36,491 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/info/b454ddcab3764fdc8af2fffba894338f as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/b454ddcab3764fdc8af2fffba894338f 2023-07-12 10:58:36,496 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/b454ddcab3764fdc8af2fffba894338f, entries=26, sequenceid=171, filesize=7.7 K 2023-07-12 10:58:36,497 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/table/d1ffc01c5dfe485096e4d50a2844f7e1 as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/d1ffc01c5dfe485096e4d50a2844f7e1 2023-07-12 10:58:36,504 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/d1ffc01c5dfe485096e4d50a2844f7e1, entries=2, sequenceid=171, filesize=4.7 K 2023-07-12 10:58:36,505 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.05 KB/3126, heapSize ~5.59 KB/5720, currentSize=0 B/0 for 1588230740 in 122ms, sequenceid=171, compaction requested=true 2023-07-12 10:58:36,518 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/recovered.edits/174.seqid, newMaxSeqId=174, maxSeqId=159 2023-07-12 10:58:36,519 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:36,519 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 10:58:36,519 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 10:58:36,519 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 10:58:36,572 INFO [RS:0;jenkins-hbase9:34455] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,34455,1689159506648; all regions closed. 2023-07-12 10:58:36,572 DEBUG [RS:0;jenkins-hbase9:34455] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-12 10:58:36,573 INFO [RS:1;jenkins-hbase9:33873] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,33873,1689159506858; all regions closed. 2023-07-12 10:58:36,573 DEBUG [RS:1;jenkins-hbase9:33873] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
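The hbase:meta close above follows the usual flush sequence: the memstore is written to a .tmp file, committed into the info and table families, and a recovered.edits seqid marker records the final sequence id. A hedged sketch of triggering the same flush from a client with the standard Admin API; the ZooKeeper quorum and client port are copied from this log, everything else is assumed:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushMetaExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Assumption: quorum and port match the mini cluster in this log.
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.set("hbase.zookeeper.property.clientPort", "49301");
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // Flushes all stores of hbase:meta, producing new HFiles like those committed above.
      admin.flush(TableName.META_TABLE_NAME);
    }
  }
}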
2023-07-12 10:58:36,573 INFO [RS:2;jenkins-hbase9:41887] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,41887,1689159507026; all regions closed. 2023-07-12 10:58:36,573 DEBUG [RS:2;jenkins-hbase9:41887] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-12 10:58:36,589 DEBUG [RS:0;jenkins-hbase9:34455] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs 2023-07-12 10:58:36,589 INFO [RS:0;jenkins-hbase9:34455] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C34455%2C1689159506648:(num 1689159507789) 2023-07-12 10:58:36,590 DEBUG [RS:0;jenkins-hbase9:34455] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:36,590 INFO [RS:0;jenkins-hbase9:34455] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:36,590 INFO [RS:0;jenkins-hbase9:34455] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:36,590 INFO [RS:0;jenkins-hbase9:34455] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:36,590 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:36,590 INFO [RS:0;jenkins-hbase9:34455] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:36,590 INFO [RS:0;jenkins-hbase9:34455] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 10:58:36,591 INFO [RS:0;jenkins-hbase9:34455] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:34455 2023-07-12 10:58:36,596 DEBUG [RS:1;jenkins-hbase9:33873] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs 2023-07-12 10:58:36,596 INFO [RS:1;jenkins-hbase9:33873] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C33873%2C1689159506858:(num 1689159507799) 2023-07-12 10:58:36,597 DEBUG [RS:1;jenkins-hbase9:33873] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:36,597 DEBUG [RS:2;jenkins-hbase9:41887] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs 2023-07-12 10:58:36,597 INFO [RS:2;jenkins-hbase9:41887] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C41887%2C1689159507026.meta:.meta(num 1689159507760) 2023-07-12 10:58:36,597 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:34455-0x1015920fb080011, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,34455,1689159506648 2023-07-12 10:58:36,597 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:34455-0x1015920fb080011, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:36,597 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:36,598 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node 
deleted, processing expiration [jenkins-hbase9.apache.org,34455,1689159506648] 2023-07-12 10:58:36,598 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,34455,1689159506648; numProcessing=1 2023-07-12 10:58:36,597 INFO [RS:1;jenkins-hbase9:33873] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:36,597 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:33873-0x1015920fb080012, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,34455,1689159506648 2023-07-12 10:58:36,598 INFO [RS:1;jenkins-hbase9:33873] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:36,597 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,34455,1689159506648 2023-07-12 10:58:36,599 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:36,599 INFO [RS:1;jenkins-hbase9:33873] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:36,599 INFO [RS:1;jenkins-hbase9:33873] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:36,599 INFO [RS:1;jenkins-hbase9:33873] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 10:58:36,598 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:33873-0x1015920fb080012, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:36,599 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:36,600 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,34455,1689159506648 already deleted, retry=false 2023-07-12 10:58:36,600 INFO [RS:1;jenkins-hbase9:33873] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:33873 2023-07-12 10:58:36,600 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,34455,1689159506648 expired; onlineServers=2 2023-07-12 10:58:36,606 DEBUG [RS:2;jenkins-hbase9:41887] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs 2023-07-12 10:58:36,606 INFO [RS:2;jenkins-hbase9:41887] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C41887%2C1689159507026:(num 1689159507775) 2023-07-12 10:58:36,606 DEBUG [RS:2;jenkins-hbase9:41887] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:36,606 INFO [RS:2;jenkins-hbase9:41887] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:36,606 INFO [RS:2;jenkins-hbase9:41887] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, 
ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:36,606 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:36,607 INFO [RS:2;jenkins-hbase9:41887] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:41887 2023-07-12 10:58:36,700 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:34455-0x1015920fb080011, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:36,700 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:34455-0x1015920fb080011, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:36,700 INFO [RS:0;jenkins-hbase9:34455] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,34455,1689159506648; zookeeper connection closed. 2023-07-12 10:58:36,701 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@65c08fac] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@65c08fac 2023-07-12 10:58:36,703 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,41887,1689159507026 2023-07-12 10:58:36,703 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:33873-0x1015920fb080012, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,41887,1689159507026 2023-07-12 10:58:36,703 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:36,708 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,33873,1689159506858 2023-07-12 10:58:36,708 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:33873-0x1015920fb080012, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,33873,1689159506858 2023-07-12 10:58:36,708 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,41887,1689159507026] 2023-07-12 10:58:36,709 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,41887,1689159507026; numProcessing=2 2023-07-12 10:58:36,809 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:36,809 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41887-0x1015920fb080013, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:36,809 INFO [RS:2;jenkins-hbase9:41887] 
regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,41887,1689159507026; zookeeper connection closed. 2023-07-12 10:58:36,809 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@c1e3ae8] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@c1e3ae8 2023-07-12 10:58:36,810 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:33873-0x1015920fb080012, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:36,810 INFO [RS:1;jenkins-hbase9:33873] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,33873,1689159506858; zookeeper connection closed. 2023-07-12 10:58:36,810 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:33873-0x1015920fb080012, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:36,811 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,41887,1689159507026 already deleted, retry=false 2023-07-12 10:58:36,811 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,41887,1689159507026 expired; onlineServers=1 2023-07-12 10:58:36,811 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,33873,1689159506858] 2023-07-12 10:58:36,811 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3a16ec5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3a16ec5 2023-07-12 10:58:36,811 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,33873,1689159506858; numProcessing=3 2023-07-12 10:58:36,811 INFO [Listener at localhost/44831] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-12 10:58:36,812 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,33873,1689159506858 already deleted, retry=false 2023-07-12 10:58:36,812 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,33873,1689159506858 expired; onlineServers=0 2023-07-12 10:58:36,812 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,43835,1689159506481' ***** 2023-07-12 10:58:36,812 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 10:58:36,812 DEBUG [M:0;jenkins-hbase9:43835] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7b806e65, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:36,812 INFO [M:0;jenkins-hbase9:43835] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:36,814 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:36,815 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): 
master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:36,815 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:36,815 INFO [M:0;jenkins-hbase9:43835] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4ca333bb{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 10:58:36,815 INFO [M:0;jenkins-hbase9:43835] server.AbstractConnector(383): Stopped ServerConnector@7c04e45c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:36,815 INFO [M:0;jenkins-hbase9:43835] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:36,816 INFO [M:0;jenkins-hbase9:43835] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@702e03cc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:36,816 INFO [M:0;jenkins-hbase9:43835] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@68b47818{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:36,816 INFO [M:0;jenkins-hbase9:43835] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,43835,1689159506481 2023-07-12 10:58:36,816 INFO [M:0;jenkins-hbase9:43835] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,43835,1689159506481; all regions closed. 2023-07-12 10:58:36,816 DEBUG [M:0;jenkins-hbase9:43835] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:36,816 INFO [M:0;jenkins-hbase9:43835] master.HMaster(1491): Stopping master jetty server 2023-07-12 10:58:36,817 INFO [M:0;jenkins-hbase9:43835] server.AbstractConnector(383): Stopped ServerConnector@f035965{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:36,818 DEBUG [M:0;jenkins-hbase9:43835] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 10:58:36,818 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 10:58:36,818 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159507433] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159507433,5,FailOnTimeoutGroup] 2023-07-12 10:58:36,818 DEBUG [M:0;jenkins-hbase9:43835] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 10:58:36,819 INFO [M:0;jenkins-hbase9:43835] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-12 10:58:36,819 INFO [M:0;jenkins-hbase9:43835] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
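Each regionserver shutdown above is noticed by the master because the server's ephemeral znode under /hbase/rs disappears and RegionServerTracker processes the expiration. A minimal sketch of observing the same child changes with the plain ZooKeeper client; the connection string is taken from the log, and the watcher body is illustrative:

import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class WatchRegionServerZNodes {
  public static void main(String[] args) throws Exception {
    Watcher watcher = new Watcher() {
      @Override
      public void process(WatchedEvent event) {
        // NodeChildrenChanged fires when a regionserver's ephemeral node is created or deleted.
        System.out.println("event=" + event.getType() + " path=" + event.getPath());
      }
    };
    ZooKeeper zk = new ZooKeeper("127.0.0.1:49301", 30000, watcher);
    // Registers a one-shot child watch on /hbase/rs, similar to the master's RegionServerTracker.
    List<String> liveServers = zk.getChildren("/hbase/rs", true);
    System.out.println("online regionservers: " + liveServers);
    Thread.sleep(60000); // keep the session open long enough to observe events
    zk.close();
  }
}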
2023-07-12 10:58:36,818 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159507433] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159507433,5,FailOnTimeoutGroup] 2023-07-12 10:58:36,820 INFO [M:0;jenkins-hbase9:43835] hbase.ChoreService(369): Chore service for: master/jenkins-hbase9:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:36,820 DEBUG [M:0;jenkins-hbase9:43835] master.HMaster(1512): Stopping service threads 2023-07-12 10:58:36,820 INFO [M:0;jenkins-hbase9:43835] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 10:58:36,821 ERROR [M:0;jenkins-hbase9:43835] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-12 10:58:36,821 INFO [M:0;jenkins-hbase9:43835] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 10:58:36,821 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-12 10:58:36,821 DEBUG [M:0;jenkins-hbase9:43835] zookeeper.ZKUtil(398): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-12 10:58:36,821 WARN [M:0;jenkins-hbase9:43835] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-12 10:58:36,821 INFO [M:0;jenkins-hbase9:43835] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 10:58:36,822 INFO [M:0;jenkins-hbase9:43835] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 10:58:36,822 DEBUG [M:0;jenkins-hbase9:43835] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 10:58:36,822 INFO [M:0;jenkins-hbase9:43835] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:36,822 DEBUG [M:0;jenkins-hbase9:43835] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:36,822 DEBUG [M:0;jenkins-hbase9:43835] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 10:58:36,822 DEBUG [M:0;jenkins-hbase9:43835] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 10:58:36,822 INFO [M:0;jenkins-hbase9:43835] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=45.34 KB heapSize=54.95 KB 2023-07-12 10:58:36,842 INFO [M:0;jenkins-hbase9:43835] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=45.34 KB at sequenceid=1006 (bloomFilter=true), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7cca7ce93d414c1fa1573b45dd7b32c2 2023-07-12 10:58:36,853 DEBUG [M:0;jenkins-hbase9:43835] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7cca7ce93d414c1fa1573b45dd7b32c2 as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7cca7ce93d414c1fa1573b45dd7b32c2 2023-07-12 10:58:36,860 INFO [M:0;jenkins-hbase9:43835] regionserver.HStore(1080): Added hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7cca7ce93d414c1fa1573b45dd7b32c2, entries=13, sequenceid=1006, filesize=7.2 K 2023-07-12 10:58:36,860 INFO [M:0;jenkins-hbase9:43835] regionserver.HRegion(2948): Finished flush of dataSize ~45.34 KB/46428, heapSize ~54.93 KB/56248, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 38ms, sequenceid=1006, compaction requested=false 2023-07-12 10:58:36,864 INFO [M:0;jenkins-hbase9:43835] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:36,864 DEBUG [M:0;jenkins-hbase9:43835] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 10:58:36,868 INFO [M:0;jenkins-hbase9:43835] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 10:58:36,868 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:36,869 INFO [M:0;jenkins-hbase9:43835] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:43835 2023-07-12 10:58:36,872 DEBUG [M:0;jenkins-hbase9:43835] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase9.apache.org,43835,1689159506481 already deleted, retry=false 2023-07-12 10:58:36,973 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:36,973 INFO [M:0;jenkins-hbase9:43835] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,43835,1689159506481; zookeeper connection closed. 
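At this point the master and all three regionservers are down while HDFS and ZooKeeper stay up, and the test pauses briefly before bringing HBase back up on the same rootdir (the "Sleeping a bit" entry below). A sketch of how such a stop/restart is typically driven with HBaseTestingUtility, assuming three regionservers; this is not the test's literal code:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class RestartClusterSketch {
  public static void restart(HBaseTestingUtility testUtil) throws Exception {
    // Stops the master and regionservers but keeps HDFS and ZooKeeper running,
    // matching the shutdown sequence logged above.
    testUtil.shutdownMiniHBaseCluster();
    Thread.sleep(2000); // "Sleeping a bit" between shutdown and restart

    // Brings the HBase cluster back up against the same rootdir; 3 regionservers assumed.
    testUtil.restartHBaseCluster(3);
    testUtil.waitUntilAllRegionsAssigned(TableName.META_TABLE_NAME);
  }
}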
2023-07-12 10:58:36,973 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:43835-0x1015920fb080010, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:36,974 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBasics(311): Sleeping a bit 2023-07-12 10:58:37,694 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 10:58:38,976 INFO [Listener at localhost/44831] client.ConnectionUtils(127): master/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:38,976 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:38,977 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:38,977 INFO [Listener at localhost/44831] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:38,977 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:38,977 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:38,977 INFO [Listener at localhost/44831] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:38,978 INFO [Listener at localhost/44831] ipc.NettyRpcServer(120): Bind to /172.31.2.10:42625 2023-07-12 10:58:38,978 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:38,980 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:38,981 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=master:42625 connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:38,986 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:426250x0, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:38,987 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:42625-0x1015920fb08001c connected 2023-07-12 10:58:38,989 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:38,990 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Set 
watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:38,990 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:38,992 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42625 2023-07-12 10:58:38,992 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42625 2023-07-12 10:58:38,993 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42625 2023-07-12 10:58:38,997 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42625 2023-07-12 10:58:38,998 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42625 2023-07-12 10:58:39,000 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:39,000 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:39,000 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:39,000 INFO [Listener at localhost/44831] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 10:58:39,000 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:39,000 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:39,001 INFO [Listener at localhost/44831] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
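The restarted master builds its RPC executors with small handler counts and short call queues, which in a mini cluster come from the test configuration rather than the defaults. A hedged sketch of the configuration keys that usually control these values; the mapping of these keys to the exact numbers in this log is an assumption:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RpcHandlerConfigSketch {
  public static Configuration smallHandlerConf() {
    Configuration conf = HBaseConfiguration.create();
    // The log shows handlerCount=3 for default.FPBQ.Fifo; this key controls it (assumed mapping).
    conf.setInt("hbase.regionserver.handler.count", 3);
    // Priority handlers, shown above as priority.RWQ.Fifo (assumed mapping).
    conf.setInt("hbase.regionserver.metahandler.count", 3);
    return conf;
  }
}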
2023-07-12 10:58:39,001 INFO [Listener at localhost/44831] http.HttpServer(1146): Jetty bound to port 41045 2023-07-12 10:58:39,001 INFO [Listener at localhost/44831] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:39,005 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:39,005 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@468d9d18{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:39,006 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:39,006 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5d5ab32a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:39,133 INFO [Listener at localhost/44831] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:39,134 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:39,134 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:39,135 INFO [Listener at localhost/44831] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 10:58:39,136 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:39,137 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@c48016{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/jetty-0_0_0_0-41045-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3556505518915079840/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 10:58:39,138 INFO [Listener at localhost/44831] server.AbstractConnector(333): Started ServerConnector@1b8ca572{HTTP/1.1, (http/1.1)}{0.0.0.0:41045} 2023-07-12 10:58:39,139 INFO [Listener at localhost/44831] server.Server(415): Started @42831ms 2023-07-12 10:58:39,139 INFO [Listener at localhost/44831] master.HMaster(444): hbase.rootdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5, hbase.cluster.distributed=false 2023-07-12 10:58:39,140 DEBUG [pool-525-thread-1] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: INIT 2023-07-12 10:58:39,157 INFO [Listener at localhost/44831] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:39,157 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:39,157 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated 
priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:39,157 INFO [Listener at localhost/44831] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:39,158 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:39,158 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:39,158 INFO [Listener at localhost/44831] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:39,159 INFO [Listener at localhost/44831] ipc.NettyRpcServer(120): Bind to /172.31.2.10:41263 2023-07-12 10:58:39,159 INFO [Listener at localhost/44831] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:39,161 DEBUG [Listener at localhost/44831] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:39,162 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:39,164 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:39,166 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41263 connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:39,170 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:412630x0, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:39,171 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:412630x0, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:39,171 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:412630x0, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:39,175 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:412630x0, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:39,178 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41263 2023-07-12 10:58:39,178 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41263-0x1015920fb08001d connected 2023-07-12 10:58:39,178 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41263 2023-07-12 10:58:39,184 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41263 2023-07-12 
10:58:39,185 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41263 2023-07-12 10:58:39,185 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41263 2023-07-12 10:58:39,188 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:39,188 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:39,188 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:39,189 INFO [Listener at localhost/44831] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:39,189 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:39,189 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:39,190 INFO [Listener at localhost/44831] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 10:58:39,190 INFO [Listener at localhost/44831] http.HttpServer(1146): Jetty bound to port 34589 2023-07-12 10:58:39,191 INFO [Listener at localhost/44831] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:39,198 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:39,198 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@478af4cc{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:39,199 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:39,199 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@ed0b40b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:39,343 INFO [Listener at localhost/44831] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:39,344 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:39,344 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:39,345 INFO [Listener at localhost/44831] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 10:58:39,354 INFO [Listener at 
localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:39,355 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@54c460b4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/jetty-0_0_0_0-34589-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6111061581543896412/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:39,357 INFO [Listener at localhost/44831] server.AbstractConnector(333): Started ServerConnector@10f53854{HTTP/1.1, (http/1.1)}{0.0.0.0:34589} 2023-07-12 10:58:39,357 INFO [Listener at localhost/44831] server.Server(415): Started @43050ms 2023-07-12 10:58:39,371 INFO [Listener at localhost/44831] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:39,371 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:39,371 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:39,371 INFO [Listener at localhost/44831] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:39,371 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:39,371 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:39,371 INFO [Listener at localhost/44831] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:39,372 INFO [Listener at localhost/44831] ipc.NettyRpcServer(120): Bind to /172.31.2.10:35789 2023-07-12 10:58:39,372 INFO [Listener at localhost/44831] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:39,385 DEBUG [Listener at localhost/44831] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:39,385 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:39,387 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:39,388 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35789 connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:39,393 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:357890x0, quorum=127.0.0.1:49301, 
baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:39,394 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:357890x0, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:39,394 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35789-0x1015920fb08001e connected 2023-07-12 10:58:39,395 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:35789-0x1015920fb08001e, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:39,395 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:35789-0x1015920fb08001e, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:39,400 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35789 2023-07-12 10:58:39,400 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35789 2023-07-12 10:58:39,401 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35789 2023-07-12 10:58:39,408 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35789 2023-07-12 10:58:39,409 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35789 2023-07-12 10:58:39,412 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:39,412 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:39,412 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:39,413 INFO [Listener at localhost/44831] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:39,413 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:39,413 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:39,414 INFO [Listener at localhost/44831] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
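The repeated "Set watcher on znode that does not yet exist, /hbase/master" entries show each new process registering a watch on the master znode before any master has published itself, which ZooKeeper's exists() call supports. A minimal sketch with the plain ZooKeeper client; the connection string comes from the log and the rest is illustrative:

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class WatchMasterZNode {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("127.0.0.1:49301", 30000, event -> { });
    // exists() may return null but still registers the watch, so the client is
    // notified once /hbase/master is created by whichever master wins election.
    Stat stat = zk.exists("/hbase/master", new Watcher() {
      @Override
      public void process(WatchedEvent event) {
        System.out.println("master znode event: " + event.getType());
      }
    });
    System.out.println(stat == null ? "no active master yet" : "master already registered");
    Thread.sleep(60000); // keep the session open long enough to see the creation event
    zk.close();
  }
}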
2023-07-12 10:58:39,414 INFO [Listener at localhost/44831] http.HttpServer(1146): Jetty bound to port 41381 2023-07-12 10:58:39,415 INFO [Listener at localhost/44831] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:39,418 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:39,418 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7ebc5a8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:39,418 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:39,419 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@38d6140c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:39,553 INFO [Listener at localhost/44831] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:39,554 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:39,554 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:39,554 INFO [Listener at localhost/44831] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 10:58:39,555 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:39,556 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5cbf1f2f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/jetty-0_0_0_0-41381-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3877174364898595678/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:39,558 INFO [Listener at localhost/44831] server.AbstractConnector(333): Started ServerConnector@576ff101{HTTP/1.1, (http/1.1)}{0.0.0.0:41381} 2023-07-12 10:58:39,558 INFO [Listener at localhost/44831] server.Server(415): Started @43251ms 2023-07-12 10:58:39,576 INFO [Listener at localhost/44831] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:39,577 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:39,577 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:39,577 INFO [Listener at localhost/44831] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:39,577 INFO 
[Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:39,577 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:39,577 INFO [Listener at localhost/44831] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:39,578 INFO [Listener at localhost/44831] ipc.NettyRpcServer(120): Bind to /172.31.2.10:39005 2023-07-12 10:58:39,578 INFO [Listener at localhost/44831] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:39,582 DEBUG [Listener at localhost/44831] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:39,582 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:39,583 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:39,584 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39005 connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:39,588 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:390050x0, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:39,590 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:390050x0, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:39,590 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39005-0x1015920fb08001f connected 2023-07-12 10:58:39,590 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:39,591 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:39,600 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39005 2023-07-12 10:58:39,600 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39005 2023-07-12 10:58:39,601 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39005 2023-07-12 10:58:39,601 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39005 2023-07-12 10:58:39,603 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=39005 2023-07-12 10:58:39,605 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:39,605 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:39,605 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:39,606 INFO [Listener at localhost/44831] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:39,606 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:39,606 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:39,606 INFO [Listener at localhost/44831] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 10:58:39,607 INFO [Listener at localhost/44831] http.HttpServer(1146): Jetty bound to port 37657 2023-07-12 10:58:39,607 INFO [Listener at localhost/44831] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:39,612 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:39,612 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@41c22c3f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:39,613 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:39,613 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@76a41604{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:39,766 INFO [Listener at localhost/44831] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:39,766 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:39,767 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:39,767 INFO [Listener at localhost/44831] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 10:58:39,770 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:39,771 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@1dc12c2b{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/jetty-0_0_0_0-37657-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5806648890781510511/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:39,773 INFO [Listener at localhost/44831] server.AbstractConnector(333): Started ServerConnector@e64859e{HTTP/1.1, (http/1.1)}{0.0.0.0:37657} 2023-07-12 10:58:39,773 INFO [Listener at localhost/44831] server.Server(415): Started @43465ms 2023-07-12 10:58:39,782 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:39,797 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@5293f66e{HTTP/1.1, (http/1.1)}{0.0.0.0:46329} 2023-07-12 10:58:39,797 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.Server(415): Started @43490ms 2023-07-12 10:58:39,797 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase9.apache.org,42625,1689159518976 2023-07-12 10:58:39,799 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 10:58:39,800 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase9.apache.org,42625,1689159518976 2023-07-12 10:58:39,801 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:39,801 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:39,803 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:35789-0x1015920fb08001e, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:39,803 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:39,806 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:39,810 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 10:58:39,810 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 10:58:39,810 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase9.apache.org,42625,1689159518976 from backup master directory 2023-07-12 10:58:39,811 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase9.apache.org,42625,1689159518976 2023-07-12 10:58:39,812 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 10:58:39,812 WARN [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 10:58:39,812 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase9.apache.org,42625,1689159518976 2023-07-12 10:58:39,842 INFO [master/jenkins-hbase9:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:39,884 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x4318667d to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:39,893 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@40dac808, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:39,893 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:39,893 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 10:58:39,897 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:39,905 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(288): Renamed hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,43835,1689159506481 to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,43835,1689159506481-dead as it is dead 2023-07-12 10:58:39,910 INFO [master/jenkins-hbase9:0:becomeActiveMaster] util.RecoverLeaseFSUtils(86): Recover lease on dfs file 
hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,43835,1689159506481-dead/jenkins-hbase9.apache.org%2C43835%2C1689159506481.1689159507266 2023-07-12 10:58:39,911 INFO [master/jenkins-hbase9:0:becomeActiveMaster] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,43835,1689159506481-dead/jenkins-hbase9.apache.org%2C43835%2C1689159506481.1689159507266 after 1ms 2023-07-12 10:58:39,911 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(300): Renamed hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,43835,1689159506481-dead/jenkins-hbase9.apache.org%2C43835%2C1689159506481.1689159507266 to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase9.apache.org%2C43835%2C1689159506481.1689159507266 2023-07-12 10:58:39,912 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(302): Delete empty local region wal dir hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,43835,1689159506481-dead 2023-07-12 10:58:39,913 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,42625,1689159518976 2023-07-12 10:58:39,916 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C42625%2C1689159518976, suffix=, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,42625,1689159518976, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/oldWALs, maxLogs=10 2023-07-12 10:58:39,938 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:39,941 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:39,941 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:39,956 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/jenkins-hbase9.apache.org,42625,1689159518976/jenkins-hbase9.apache.org%2C42625%2C1689159518976.1689159519916 2023-07-12 10:58:39,957 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK], DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK], DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK]] 2023-07-12 10:58:39,958 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:39,958 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:39,958 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:39,958 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:39,959 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:39,964 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 10:58:39,964 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 10:58:39,977 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2e5db07848e94ff6bf97226c840d95d8 2023-07-12 10:58:39,990 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7cca7ce93d414c1fa1573b45dd7b32c2 2023-07-12 10:58:39,990 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:39,991 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5179): 
Found 1 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals 2023-07-12 10:58:39,991 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5276): Replaying edits from hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase9.apache.org%2C43835%2C1689159506481.1689159507266 2023-07-12 10:58:39,997 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5464): Applied 0, skipped 128, firstSequenceIdInLog=896, maxSequenceIdInLog=1008, path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase9.apache.org%2C43835%2C1689159506481.1689159507266 2023-07-12 10:58:39,999 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5086): Deleted recovered.edits file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase9.apache.org%2C43835%2C1689159506481.1689159507266 2023-07-12 10:58:40,010 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:40,016 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1008.seqid, newMaxSeqId=1008, maxSeqId=894 2023-07-12 10:58:40,017 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=1009; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11565694720, jitterRate=0.07713925838470459}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:40,017 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 10:58:40,019 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 10:58:40,021 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 10:58:40,021 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 10:58:40,021 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
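
Before the new master can replay its predecessor's local-store WAL, the entries above show it renaming the dead master's WAL directory, forcing HDFS lease recovery on the log file the dead process still held open ("Recover lease on dfs file ... Recovered lease, attempt=0 ... after 1ms"), and only then moving the file under recovered.wals for replay. A minimal sketch of the lease-recovery step against the plain HDFS client API, using the file path from the log; the polling interval and class name are illustrative, not HBase's RecoverLeaseFSUtils:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class LeaseRecoverySketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path wal = new Path("hdfs://localhost:42757/user/jenkins/test-data/"
            + "1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/WALs/"
            + "jenkins-hbase9.apache.org,43835,1689159506481-dead/"
            + "jenkins-hbase9.apache.org%2C43835%2C1689159506481.1689159507266");
        FileSystem fs = wal.getFileSystem(conf);
        if (fs instanceof DistributedFileSystem) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // Ask the NameNode to close the file on behalf of the dead writer.
            boolean recovered = dfs.recoverLease(wal);
            while (!recovered) {
                Thread.sleep(1000);                   // illustrative poll interval
                recovered = dfs.isFileClosed(wal);
            }
        }
        // Only after the lease is recovered is the file safe to rename and replay.
    }
}
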
2023-07-12 10:58:40,022 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-12 10:58:40,035 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta 2023-07-12 10:58:40,037 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace 2023-07-12 10:58:40,037 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup 2023-07-12 10:58:40,037 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default 2023-07-12 10:58:40,037 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase 2023-07-12 10:58:40,038 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, REOPEN/MOVE 2023-07-12 10:58:40,038 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, REOPEN/MOVE 2023-07-12 10:58:40,039 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=14, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-12 10:58:40,039 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=21, state=SUCCESS; ServerCrashProcedure jenkins-hbase9.apache.org,39623,1689159484526, splitWal=true, meta=false 2023-07-12 10:58:40,039 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=22, state=SUCCESS; ModifyNamespaceProcedure, namespace=default 2023-07-12 10:58:40,040 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=23, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-12 10:58:40,040 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=26, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-12 10:58:40,040 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=29, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-12 10:58:40,041 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=30, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-12 10:58:40,041 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=51, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-12 10:58:40,041 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=72, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-12 10:58:40,042 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=73, state=SUCCESS; 
TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, REOPEN/MOVE 2023-07-12 10:58:40,042 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=74, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, REOPEN/MOVE 2023-07-12 10:58:40,042 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=79, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo 2023-07-12 10:58:40,042 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=80, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-12 10:58:40,042 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=83, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-12 10:58:40,043 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=86, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-12 10:58:40,043 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=87, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 10:58:40,043 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=88, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-12 10:58:40,044 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=91, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-12 10:58:40,045 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=94, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-12 10:58:40,045 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=95, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-12 10:58:40,045 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=98, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-12 10:58:40,045 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=99, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-12 10:58:40,046 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=100, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689159500013 type: FLUSH version: 2 ttl: 0 ) 2023-07-12 10:58:40,046 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=103, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-12 10:58:40,046 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=106, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-12 10:58:40,047 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=107, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone 
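
The long run of "Completed pid=..., state=..." entries above is the RegionProcedureStore being loaded into the new master: every procedure from the previous master's lifetime comes back with its final state, including the one ROLLEDBACK CreateTableProcedure further down. When triaging a run like this one, a small throwaway helper (hypothetical, not part of HBase) can tally those states straight from the log text:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ProcedureTally {
    // Matches the "Completed pid=NN, state=STATE" fragments emitted above.
    private static final Pattern COMPLETED = Pattern.compile("Completed pid=(\\d+), state=(\\w+)");

    public static void main(String[] args) {
        String[] sample = {
            "Completed pid=29, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign",
            "Completed pid=112, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException ..."
        };
        Map<String, Integer> byState = new LinkedHashMap<>();
        for (String line : sample) {
            Matcher m = COMPLETED.matcher(line);
            if (m.find()) {
                byState.merge(m.group(2), 1, Integer::sum);   // count procedures per final state
            }
        }
        System.out.println(byState);                          // {SUCCESS=1, ROLLEDBACK=1}
    }
}
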
2023-07-12 10:58:40,047 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=110, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-12 10:58:40,047 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=111, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns 2023-07-12 10:58:40,047 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=112, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:40,048 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=113, state=SUCCESS; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:40,048 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=116, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:40,048 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=119, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-12 10:58:40,048 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=120, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-12 10:58:40,048 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=121, state=SUCCESS; ServerCrashProcedure jenkins-hbase9.apache.org,45597,1689159484713, splitWal=true, meta=false 2023-07-12 10:58:40,049 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=122, state=SUCCESS; ServerCrashProcedure jenkins-hbase9.apache.org,43635,1689159491271, splitWal=true, meta=false 2023-07-12 10:58:40,049 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=123, state=SUCCESS; ServerCrashProcedure jenkins-hbase9.apache.org,43117,1689159488336, splitWal=true, meta=true 2023-07-12 10:58:40,049 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=124, state=SUCCESS; ServerCrashProcedure jenkins-hbase9.apache.org,42501,1689159484335, splitWal=true, meta=false 2023-07-12 10:58:40,049 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=131, state=SUCCESS; CreateTableProcedure table=hbase:quota 2023-07-12 10:58:40,050 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 27 msec 2023-07-12 10:58:40,050 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 10:58:40,065 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [meta-region-server] 2023-07-12 10:58:40,069 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(272): Loaded hbase:meta state=OPEN, location=jenkins-hbase9.apache.org,41887,1689159507026, table=hbase:meta, 
region=1588230740 2023-07-12 10:58:40,075 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 3 possibly 'live' servers, and 0 'splitting'. 2023-07-12 10:58:40,134 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,41887,1689159507026 already deleted, retry=false 2023-07-12 10:58:40,134 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase9.apache.org,41887,1689159507026 on jenkins-hbase9.apache.org,42625,1689159518976 2023-07-12 10:58:40,135 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=134, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase9.apache.org,41887,1689159507026, splitWal=true, meta=true 2023-07-12 10:58:40,135 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=134 for jenkins-hbase9.apache.org,41887,1689159507026 (carryingMeta=true) jenkins-hbase9.apache.org,41887,1689159507026/CRASHED/regionCount=1/lock=java.util.concurrent.locks.ReentrantReadWriteLock@3b9fc31c[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-12 10:58:40,209 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,33873,1689159506858 already deleted, retry=false 2023-07-12 10:58:40,210 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase9.apache.org,33873,1689159506858 on jenkins-hbase9.apache.org,42625,1689159518976 2023-07-12 10:58:40,210 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=135, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase9.apache.org,33873,1689159506858, splitWal=true, meta=false 2023-07-12 10:58:40,211 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=135 for jenkins-hbase9.apache.org,33873,1689159506858 (carryingMeta=false) jenkins-hbase9.apache.org,33873,1689159506858/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@6c9499ad[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-12 10:58:40,279 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,34455,1689159506648 already deleted, retry=false 2023-07-12 10:58:40,279 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase9.apache.org,34455,1689159506648 on jenkins-hbase9.apache.org,42625,1689159518976 2023-07-12 10:58:40,280 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=136, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase9.apache.org,34455,1689159506648, splitWal=true, meta=false 2023-07-12 10:58:40,280 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=136 for jenkins-hbase9.apache.org,34455,1689159506648 (carryingMeta=false) jenkins-hbase9.apache.org,34455,1689159506648/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@c46c8b2[Write locks = 1, Read locks = 0], oldState=ONLINE. 
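
The "Node /hbase/draining/... already deleted, retry=false" entries above reflect a common ZooKeeper idiom used while the crashed servers are expired: when the goal is simply that a znode be absent, a delete that fails with NoNode is treated as success rather than retried. A minimal sketch with the plain ZooKeeper client, reusing the quorum and one draining path from the log; connection handling is pared down and illustrative:

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class DeleteIfExistsSketch {
    static void deleteIfExists(ZooKeeper zk, String path) throws KeeperException, InterruptedException {
        try {
            zk.delete(path, -1);                      // -1 = delete regardless of version
        } catch (KeeperException.NoNodeException e) {
            // Already gone (or never created): the desired end state is reached.
        }
    }

    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:49301", 90_000, event -> { });
        deleteIfExists(zk, "/hbase/draining/jenkins-hbase9.apache.org,41887,1689159507026");
        zk.close();
    }
}
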
2023-07-12 10:58:40,280 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/balancer 2023-07-12 10:58:40,281 INFO [master/jenkins-hbase9:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 10:58:40,281 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 10:58:40,282 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 10:58:40,283 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 10:58:40,284 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 10:58:40,348 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:35789-0x1015920fb08001e, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:40,348 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:40,348 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:40,348 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:40,348 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:40,348 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase9.apache.org,42625,1689159518976, sessionid=0x1015920fb08001c, setting cluster-up flag (Was=false) 2023-07-12 10:58:40,434 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 10:58:40,435 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase9.apache.org,42625,1689159518976 2023-07-12 10:58:40,464 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 10:58:40,465 DEBUG 
[master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase9.apache.org,42625,1689159518976 2023-07-12 10:58:40,466 WARN [master/jenkins-hbase9:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/.hbase-snapshot/.tmp 2023-07-12 10:58:40,467 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 10:58:40,467 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 10:58:40,468 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(511): Read ZK GroupInfo count:2 2023-07-12 10:58:40,469 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 10:58:40,470 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:58:40,470 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-12 10:58:40,471 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,42625,1689159518976] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:40,472 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase9.apache.org/172.31.2.10:41887 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:41887 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 10:58:40,474 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase9.apache.org/172.31.2.10:41887 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:41887 2023-07-12 10:58:40,475 INFO [RS:0;jenkins-hbase9:41263] regionserver.HRegionServer(951): ClusterId : 2ee0ec36-84f9-4576-888d-f37f0b52beaa 2023-07-12 10:58:40,475 DEBUG [RS:0;jenkins-hbase9:41263] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:40,479 INFO [RS:1;jenkins-hbase9:35789] regionserver.HRegionServer(951): ClusterId : 2ee0ec36-84f9-4576-888d-f37f0b52beaa 2023-07-12 10:58:40,479 INFO [RS:2;jenkins-hbase9:39005] regionserver.HRegionServer(951): ClusterId : 2ee0ec36-84f9-4576-888d-f37f0b52beaa 2023-07-12 10:58:40,479 DEBUG [RS:1;jenkins-hbase9:35789] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:40,479 DEBUG [RS:2;jenkins-hbase9:39005] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:40,486 DEBUG [RS:0;jenkins-hbase9:41263] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:40,486 DEBUG [RS:0;jenkins-hbase9:41263] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:40,487 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 10:58:40,487 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of 
cost functions = 0.0 etc. 2023-07-12 10:58:40,488 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 10:58:40,488 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 10:58:40,488 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:40,488 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:40,488 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:40,488 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:40,488 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase9:0, corePoolSize=10, maxPoolSize=10 2023-07-12 10:58:40,488 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,488 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:40,489 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,496 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689159550496 2023-07-12 10:58:40,496 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 10:58:40,497 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 10:58:40,497 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 10:58:40,498 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 10:58:40,498 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 10:58:40,498 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 10:58:40,499 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,501 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 10:58:40,502 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 10:58:40,502 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 10:58:40,503 DEBUG [RS:1;jenkins-hbase9:35789] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:40,503 DEBUG [RS:1;jenkins-hbase9:35789] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:40,504 DEBUG [RS:0;jenkins-hbase9:41263] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:40,508 DEBUG [PEWorker-3] master.DeadServer(103): Processing jenkins-hbase9.apache.org,33873,1689159506858; numProcessing=1 2023-07-12 10:58:40,508 INFO [PEWorker-3] procedure.ServerCrashProcedure(161): Start pid=135, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,33873,1689159506858, splitWal=true, meta=false 2023-07-12 10:58:40,508 DEBUG [PEWorker-2] master.DeadServer(103): Processing jenkins-hbase9.apache.org,34455,1689159506648; numProcessing=2 2023-07-12 10:58:40,508 INFO [PEWorker-2] procedure.ServerCrashProcedure(161): Start pid=136, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,34455,1689159506648, splitWal=true, meta=false 2023-07-12 10:58:40,510 DEBUG [RS:1;jenkins-hbase9:35789] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:40,513 DEBUG [PEWorker-1] master.DeadServer(103): Processing jenkins-hbase9.apache.org,41887,1689159507026; numProcessing=3 2023-07-12 10:58:40,513 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 10:58:40,514 INFO [PEWorker-1] procedure.ServerCrashProcedure(161): Start pid=134, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,41887,1689159507026, splitWal=true, meta=true 2023-07-12 10:58:40,514 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 10:58:40,515 DEBUG [RS:0;jenkins-hbase9:41263] zookeeper.ReadOnlyZKClient(139): Connect 0x44efe797 to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:40,515 DEBUG [RS:2;jenkins-hbase9:39005] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:40,515 DEBUG [RS:2;jenkins-hbase9:39005] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:40,515 INFO [PEWorker-1] 
procedure.ServerCrashProcedure(300): Splitting WALs pid=134, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,41887,1689159507026, splitWal=true, meta=true, isMeta: true 2023-07-12 10:58:40,517 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,41887,1689159507026-splitting 2023-07-12 10:58:40,518 DEBUG [RS:2;jenkins-hbase9:39005] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:40,522 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159520515,5,FailOnTimeoutGroup] 2023-07-12 10:58:40,522 DEBUG [RS:1;jenkins-hbase9:35789] zookeeper.ReadOnlyZKClient(139): Connect 0x1ca0ac65 to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:40,526 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159520526,5,FailOnTimeoutGroup] 2023-07-12 10:58:40,526 DEBUG [RS:2;jenkins-hbase9:39005] zookeeper.ReadOnlyZKClient(139): Connect 0x060d7140 to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:40,526 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,527 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 10:58:40,527 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,527 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,528 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689159520528, completionTime=-1 2023-07-12 10:58:40,528 WARN [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(766): The value of 'hbase.master.wait.on.regionservers.maxtostart' (-1) is set less than 'hbase.master.wait.on.regionservers.mintostart' (1), ignoring. 2023-07-12 10:58:40,528 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=0; waited=0ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-12 10:58:40,529 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,41887,1689159507026-splitting dir is empty, no logs to split. 
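
The refused connection above (the RSGroup startup worker dialing the previous region server at 172.31.2.10:41887, which is gone) ends with the address being added to a failed-servers list, so follow-up calls within a short window fail fast instead of re-dialing a dead endpoint. An illustrative sketch of that idea; the class, the field names, and the 2-second window are assumptions of this sketch, not HBase's ipc.FailedServers implementation:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FailedServerCache {
    private final Map<String, Long> failedUntil = new ConcurrentHashMap<>();
    private final long windowMs;

    public FailedServerCache(long windowMs) {
        this.windowMs = windowMs;
    }

    /** Remember that a connection attempt to this address just failed. */
    public void addFailure(String hostAndPort) {
        failedUntil.put(hostAndPort, System.currentTimeMillis() + windowMs);
    }

    /** True while the address is still inside its fail-fast window. */
    public boolean isFailed(String hostAndPort) {
        Long until = failedUntil.get(hostAndPort);
        if (until == null) {
            return false;
        }
        if (System.currentTimeMillis() >= until) {
            failedUntil.remove(hostAndPort);          // window elapsed, allow a fresh attempt
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        FailedServerCache cache = new FailedServerCache(2_000);
        cache.addFailure("jenkins-hbase9.apache.org:41887");
        System.out.println(cache.isFailed("jenkins-hbase9.apache.org:41887"));   // true within the window
    }
}
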
2023-07-12 10:58:40,530 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase9.apache.org,41887,1689159507026 WAL count=0, meta=true 2023-07-12 10:58:40,533 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,41887,1689159507026-splitting dir is empty, no logs to split. 2023-07-12 10:58:40,534 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase9.apache.org,41887,1689159507026 WAL count=0, meta=true 2023-07-12 10:58:40,534 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase9.apache.org,41887,1689159507026 WAL splitting is done? wals=0, meta=true 2023-07-12 10:58:40,536 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=137, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 10:58:40,545 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 10:58:40,547 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-12 10:58:40,548 DEBUG [RS:2;jenkins-hbase9:39005] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@8ca8ff1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:40,548 DEBUG [RS:2;jenkins-hbase9:39005] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@cf51a30, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:40,548 DEBUG [RS:1;jenkins-hbase9:35789] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@12675181, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:40,548 DEBUG [RS:1;jenkins-hbase9:35789] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1658c715, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:40,558 DEBUG [RS:0;jenkins-hbase9:41263] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a28017a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:40,558 DEBUG [RS:0;jenkins-hbase9:41263] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4e178ce7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, 
fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:40,559 DEBUG [RS:2;jenkins-hbase9:39005] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase9:39005 2023-07-12 10:58:40,559 INFO [RS:2;jenkins-hbase9:39005] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:40,559 INFO [RS:2;jenkins-hbase9:39005] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:40,559 DEBUG [RS:2;jenkins-hbase9:39005] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 10:58:40,560 INFO [RS:2;jenkins-hbase9:39005] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,42625,1689159518976 with isa=jenkins-hbase9.apache.org/172.31.2.10:39005, startcode=1689159519576 2023-07-12 10:58:40,560 DEBUG [RS:2;jenkins-hbase9:39005] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:40,562 DEBUG [RS:1;jenkins-hbase9:35789] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase9:35789 2023-07-12 10:58:40,562 INFO [RS:1;jenkins-hbase9:35789] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:40,562 INFO [RS:1;jenkins-hbase9:35789] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:40,562 DEBUG [RS:1;jenkins-hbase9:35789] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 10:58:40,562 INFO [RS:1;jenkins-hbase9:35789] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,42625,1689159518976 with isa=jenkins-hbase9.apache.org/172.31.2.10:35789, startcode=1689159519370 2023-07-12 10:58:40,563 DEBUG [RS:1;jenkins-hbase9:35789] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:40,568 DEBUG [RS:0;jenkins-hbase9:41263] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase9:41263 2023-07-12 10:58:40,568 INFO [RS:0;jenkins-hbase9:41263] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:40,568 INFO [RS:0;jenkins-hbase9:41263] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:40,568 DEBUG [RS:0;jenkins-hbase9:41263] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-12 10:58:40,569 INFO [RS:0;jenkins-hbase9:41263] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,42625,1689159518976 with isa=jenkins-hbase9.apache.org/172.31.2.10:41263, startcode=1689159519156 2023-07-12 10:58:40,569 DEBUG [RS:0;jenkins-hbase9:41263] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:40,569 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:37925, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:40,571 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42625] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:40,571 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:58:40,571 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:53565, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:40,572 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:35767, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:40,573 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42625] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:40,572 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 10:58:40,573 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:58:40,573 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 10:58:40,573 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42625] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:40,573 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 10:58:40,573 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 10:58:40,573 DEBUG [RS:2;jenkins-hbase9:39005] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5 2023-07-12 10:58:40,573 DEBUG [RS:2;jenkins-hbase9:39005] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42757 2023-07-12 10:58:40,573 DEBUG [RS:1;jenkins-hbase9:35789] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5 2023-07-12 10:58:40,573 DEBUG [RS:0;jenkins-hbase9:41263] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5 2023-07-12 10:58:40,573 DEBUG [RS:1;jenkins-hbase9:35789] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42757 2023-07-12 10:58:40,574 DEBUG [RS:1;jenkins-hbase9:35789] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41045 2023-07-12 10:58:40,573 DEBUG [RS:2;jenkins-hbase9:39005] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41045 2023-07-12 10:58:40,573 DEBUG [RS:0;jenkins-hbase9:41263] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42757 2023-07-12 10:58:40,574 DEBUG [RS:0;jenkins-hbase9:41263] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41045 2023-07-12 10:58:40,578 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=50ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-12 10:58:40,579 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:40,580 DEBUG [RS:2;jenkins-hbase9:39005] zookeeper.ZKUtil(162): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:40,580 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,41263,1689159519156] 2023-07-12 10:58:40,580 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,35789,1689159519370] 2023-07-12 10:58:40,580 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,39005,1689159519576] 2023-07-12 10:58:40,580 DEBUG [RS:0;jenkins-hbase9:41263] zookeeper.ZKUtil(162): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:40,580 WARN [RS:2;jenkins-hbase9:39005] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 10:58:40,581 DEBUG [RS:1;jenkins-hbase9:35789] zookeeper.ZKUtil(162): regionserver:35789-0x1015920fb08001e, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:40,581 WARN [RS:1;jenkins-hbase9:35789] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 10:58:40,581 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,42625,1689159518976] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase9.apache.org/172.31.2.10:41887 this server is in the failed servers list 2023-07-12 10:58:40,581 WARN [RS:0;jenkins-hbase9:41263] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 10:58:40,581 INFO [RS:1;jenkins-hbase9:35789] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:40,581 INFO [RS:2;jenkins-hbase9:39005] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:40,581 INFO [RS:0;jenkins-hbase9:41263] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:40,581 DEBUG [RS:1;jenkins-hbase9:35789] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:40,582 DEBUG [RS:2;jenkins-hbase9:39005] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:40,582 DEBUG [RS:0;jenkins-hbase9:41263] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:40,593 DEBUG [RS:1;jenkins-hbase9:35789] zookeeper.ZKUtil(162): regionserver:35789-0x1015920fb08001e, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:40,593 DEBUG [RS:2;jenkins-hbase9:39005] zookeeper.ZKUtil(162): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:40,594 DEBUG [RS:0;jenkins-hbase9:41263] zookeeper.ZKUtil(162): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:40,594 DEBUG [RS:2;jenkins-hbase9:39005] zookeeper.ZKUtil(162): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:40,594 DEBUG [RS:1;jenkins-hbase9:35789] zookeeper.ZKUtil(162): regionserver:35789-0x1015920fb08001e, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:40,594 DEBUG [RS:0;jenkins-hbase9:41263] zookeeper.ZKUtil(162): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:40,594 DEBUG [RS:1;jenkins-hbase9:35789] zookeeper.ZKUtil(162): regionserver:35789-0x1015920fb08001e, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:40,594 DEBUG [RS:2;jenkins-hbase9:39005] zookeeper.ZKUtil(162): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:40,594 DEBUG [RS:0;jenkins-hbase9:41263] zookeeper.ZKUtil(162): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:40,596 DEBUG [RS:1;jenkins-hbase9:35789] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:40,596 DEBUG [RS:2;jenkins-hbase9:39005] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:40,596 INFO [RS:1;jenkins-hbase9:35789] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:40,596 INFO [RS:2;jenkins-hbase9:39005] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:40,599 INFO [RS:1;jenkins-hbase9:35789] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:40,601 DEBUG [RS:0;jenkins-hbase9:41263] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:40,602 INFO [RS:0;jenkins-hbase9:41263] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:40,603 INFO [RS:2;jenkins-hbase9:39005] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:40,604 INFO [RS:1;jenkins-hbase9:35789] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:40,604 INFO [RS:1;jenkins-hbase9:35789] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,606 INFO [RS:2;jenkins-hbase9:39005] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:40,606 INFO [RS:0;jenkins-hbase9:41263] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:40,606 INFO [RS:2;jenkins-hbase9:39005] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:40,606 INFO [RS:1;jenkins-hbase9:35789] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:40,609 INFO [RS:0;jenkins-hbase9:41263] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:40,609 INFO [RS:0;jenkins-hbase9:41263] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,609 INFO [RS:2;jenkins-hbase9:39005] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:40,615 INFO [RS:0;jenkins-hbase9:41263] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:40,615 INFO [RS:1;jenkins-hbase9:35789] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,615 INFO [RS:2;jenkins-hbase9:39005] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,615 DEBUG [RS:2;jenkins-hbase9:39005] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,615 DEBUG [RS:1;jenkins-hbase9:35789] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,615 DEBUG [RS:2;jenkins-hbase9:39005] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,615 DEBUG [RS:1;jenkins-hbase9:35789] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,615 DEBUG [RS:2;jenkins-hbase9:39005] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,616 DEBUG [RS:1;jenkins-hbase9:35789] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,617 DEBUG [RS:2;jenkins-hbase9:39005] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,617 DEBUG [RS:1;jenkins-hbase9:35789] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,617 DEBUG [RS:2;jenkins-hbase9:39005] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,617 DEBUG [RS:1;jenkins-hbase9:35789] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,617 DEBUG [RS:2;jenkins-hbase9:39005] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:40,617 DEBUG [RS:1;jenkins-hbase9:35789] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, 
corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:40,617 DEBUG [RS:2;jenkins-hbase9:39005] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,617 DEBUG [RS:1;jenkins-hbase9:35789] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,617 DEBUG [RS:2;jenkins-hbase9:39005] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,617 DEBUG [RS:1;jenkins-hbase9:35789] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,617 INFO [RS:0;jenkins-hbase9:41263] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,617 DEBUG [RS:1;jenkins-hbase9:35789] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,617 DEBUG [RS:2;jenkins-hbase9:39005] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,617 DEBUG [RS:1;jenkins-hbase9:35789] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,617 DEBUG [RS:2;jenkins-hbase9:39005] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,617 DEBUG [RS:0;jenkins-hbase9:41263] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,618 DEBUG [RS:0;jenkins-hbase9:41263] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,618 DEBUG [RS:0;jenkins-hbase9:41263] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,618 DEBUG [RS:0;jenkins-hbase9:41263] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,618 DEBUG [RS:0;jenkins-hbase9:41263] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,618 DEBUG [RS:0;jenkins-hbase9:41263] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:40,618 DEBUG [RS:0;jenkins-hbase9:41263] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,618 DEBUG [RS:0;jenkins-hbase9:41263] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,618 DEBUG [RS:0;jenkins-hbase9:41263] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,618 DEBUG [RS:0;jenkins-hbase9:41263] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:40,629 INFO [RS:0;jenkins-hbase9:41263] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,629 INFO [RS:0;jenkins-hbase9:41263] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,629 INFO [RS:0;jenkins-hbase9:41263] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,629 INFO [RS:2;jenkins-hbase9:39005] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,629 INFO [RS:2;jenkins-hbase9:39005] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,629 INFO [RS:2;jenkins-hbase9:39005] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,634 INFO [RS:1;jenkins-hbase9:35789] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,634 INFO [RS:1;jenkins-hbase9:35789] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,634 INFO [RS:1;jenkins-hbase9:35789] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,645 INFO [RS:0;jenkins-hbase9:41263] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:40,645 INFO [RS:0;jenkins-hbase9:41263] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,41263,1689159519156-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,648 INFO [RS:2;jenkins-hbase9:39005] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:40,649 INFO [RS:2;jenkins-hbase9:39005] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,39005,1689159519576-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:40,652 INFO [RS:1;jenkins-hbase9:35789] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:40,652 INFO [RS:1;jenkins-hbase9:35789] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,35789,1689159519370-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:40,662 INFO [RS:0;jenkins-hbase9:41263] regionserver.Replication(203): jenkins-hbase9.apache.org,41263,1689159519156 started 2023-07-12 10:58:40,662 INFO [RS:0;jenkins-hbase9:41263] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,41263,1689159519156, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:41263, sessionid=0x1015920fb08001d 2023-07-12 10:58:40,662 DEBUG [RS:0;jenkins-hbase9:41263] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:40,663 DEBUG [RS:0;jenkins-hbase9:41263] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:40,663 DEBUG [RS:0;jenkins-hbase9:41263] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,41263,1689159519156' 2023-07-12 10:58:40,663 DEBUG [RS:0;jenkins-hbase9:41263] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:40,663 DEBUG [RS:0;jenkins-hbase9:41263] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:40,664 DEBUG [RS:0;jenkins-hbase9:41263] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:40,664 DEBUG [RS:0;jenkins-hbase9:41263] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:40,664 DEBUG [RS:0;jenkins-hbase9:41263] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:40,664 DEBUG [RS:0;jenkins-hbase9:41263] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,41263,1689159519156' 2023-07-12 10:58:40,664 DEBUG [RS:0;jenkins-hbase9:41263] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:40,664 DEBUG [RS:0;jenkins-hbase9:41263] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:40,665 INFO [RS:2;jenkins-hbase9:39005] regionserver.Replication(203): jenkins-hbase9.apache.org,39005,1689159519576 started 2023-07-12 10:58:40,666 INFO [RS:2;jenkins-hbase9:39005] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,39005,1689159519576, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:39005, sessionid=0x1015920fb08001f 2023-07-12 10:58:40,666 DEBUG [RS:2;jenkins-hbase9:39005] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:40,666 DEBUG [RS:2;jenkins-hbase9:39005] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:40,666 DEBUG [RS:2;jenkins-hbase9:39005] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,39005,1689159519576' 2023-07-12 10:58:40,666 DEBUG [RS:2;jenkins-hbase9:39005] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:40,668 DEBUG [RS:0;jenkins-hbase9:41263] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:40,668 INFO [RS:0;jenkins-hbase9:41263] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 10:58:40,668 INFO 
[RS:0;jenkins-hbase9:41263] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 10:58:40,668 DEBUG [RS:2;jenkins-hbase9:39005] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:40,668 DEBUG [RS:2;jenkins-hbase9:39005] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:40,668 DEBUG [RS:2;jenkins-hbase9:39005] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:40,669 DEBUG [RS:2;jenkins-hbase9:39005] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:40,669 DEBUG [RS:2;jenkins-hbase9:39005] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,39005,1689159519576' 2023-07-12 10:58:40,669 DEBUG [RS:2;jenkins-hbase9:39005] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:40,669 DEBUG [RS:2;jenkins-hbase9:39005] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:40,669 INFO [RS:1;jenkins-hbase9:35789] regionserver.Replication(203): jenkins-hbase9.apache.org,35789,1689159519370 started 2023-07-12 10:58:40,669 INFO [RS:1;jenkins-hbase9:35789] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,35789,1689159519370, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:35789, sessionid=0x1015920fb08001e 2023-07-12 10:58:40,670 DEBUG [RS:1;jenkins-hbase9:35789] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:40,670 DEBUG [RS:1;jenkins-hbase9:35789] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:40,670 DEBUG [RS:2;jenkins-hbase9:39005] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:40,670 INFO [RS:2;jenkins-hbase9:39005] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 10:58:40,670 INFO [RS:2;jenkins-hbase9:39005] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 10:58:40,670 DEBUG [RS:1;jenkins-hbase9:35789] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,35789,1689159519370' 2023-07-12 10:58:40,670 DEBUG [RS:1;jenkins-hbase9:35789] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:40,670 DEBUG [RS:1;jenkins-hbase9:35789] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:40,671 DEBUG [RS:1;jenkins-hbase9:35789] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:40,671 DEBUG [RS:1;jenkins-hbase9:35789] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:40,671 DEBUG [RS:1;jenkins-hbase9:35789] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:40,671 DEBUG [RS:1;jenkins-hbase9:35789] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,35789,1689159519370' 2023-07-12 10:58:40,671 DEBUG [RS:1;jenkins-hbase9:35789] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:40,671 DEBUG [RS:1;jenkins-hbase9:35789] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:40,672 DEBUG [RS:1;jenkins-hbase9:35789] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:40,672 INFO [RS:1;jenkins-hbase9:35789] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 10:58:40,672 INFO [RS:1;jenkins-hbase9:35789] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 10:58:40,697 DEBUG [jenkins-hbase9:42625] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 10:58:40,698 DEBUG [jenkins-hbase9:42625] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:40,698 DEBUG [jenkins-hbase9:42625] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:40,698 DEBUG [jenkins-hbase9:42625] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:40,698 DEBUG [jenkins-hbase9:42625] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:40,698 DEBUG [jenkins-hbase9:42625] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:40,699 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,41263,1689159519156, state=OPENING 2023-07-12 10:58:40,702 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 10:58:40,702 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 10:58:40,702 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=137, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,41263,1689159519156}] 2023-07-12 10:58:40,770 INFO [RS:0;jenkins-hbase9:41263] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C41263%2C1689159519156, suffix=, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,41263,1689159519156, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs, maxLogs=32 2023-07-12 10:58:40,771 INFO [RS:2;jenkins-hbase9:39005] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C39005%2C1689159519576, suffix=, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,39005,1689159519576, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs, maxLogs=32 2023-07-12 10:58:40,774 INFO [RS:1;jenkins-hbase9:35789] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C35789%2C1689159519370, suffix=, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,35789,1689159519370, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs, maxLogs=32 2023-07-12 10:58:40,783 WARN [ReadOnlyZKClient-127.0.0.1:49301@0x4318667d] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-12 10:58:40,784 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,42625,1689159518976] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:40,786 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping 
handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:40,787 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:40,790 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:40,790 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:38356, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:40,799 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41263] ipc.CallRunner(144): callId: 2 service: ClientService methodName: Get size: 88 connection: 172.31.2.10:38356 deadline: 1689159580790, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:40,799 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:40,800 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:40,800 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:40,805 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:40,805 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:40,805 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:40,806 INFO [RS:0;jenkins-hbase9:41263] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,41263,1689159519156/jenkins-hbase9.apache.org%2C41263%2C1689159519156.1689159520770 2023-07-12 10:58:40,809 INFO [RS:2;jenkins-hbase9:39005] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,39005,1689159519576/jenkins-hbase9.apache.org%2C39005%2C1689159519576.1689159520772 2023-07-12 10:58:40,809 DEBUG [RS:0;jenkins-hbase9:41263] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK], DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK], DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK]] 2023-07-12 10:58:40,809 INFO [RS:1;jenkins-hbase9:35789] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,35789,1689159519370/jenkins-hbase9.apache.org%2C35789%2C1689159519370.1689159520774 2023-07-12 10:58:40,809 DEBUG [RS:2;jenkins-hbase9:39005] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK], DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK], DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK]] 2023-07-12 10:58:40,810 DEBUG [RS:1;jenkins-hbase9:35789] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK], DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK], DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK]] 2023-07-12 10:58:40,856 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:40,857 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:40,859 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:38364, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:40,862 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 10:58:40,862 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:40,864 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C41263%2C1689159519156.meta, suffix=.meta, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,41263,1689159519156, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs, maxLogs=32 2023-07-12 10:58:40,878 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:40,878 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:40,878 DEBUG 
[RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:40,880 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,41263,1689159519156/jenkins-hbase9.apache.org%2C41263%2C1689159519156.meta.1689159520864.meta 2023-07-12 10:58:40,880 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK], DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK], DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK]] 2023-07-12 10:58:40,880 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:40,880 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:58:40,880 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 10:58:40,880 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-12 10:58:40,880 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 10:58:40,881 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:40,881 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 10:58:40,881 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 10:58:40,882 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 10:58:40,883 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info 2023-07-12 10:58:40,883 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info 2023-07-12 10:58:40,883 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 10:58:40,891 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/b454ddcab3764fdc8af2fffba894338f 2023-07-12 10:58:40,895 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/b49a1b1f3cde4a5f98186fb585abc133 2023-07-12 10:58:40,900 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f03719a871834b2389c705f4609fdcac 2023-07-12 10:58:40,900 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/f03719a871834b2389c705f4609fdcac 2023-07-12 10:58:40,900 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:40,900 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 10:58:40,901 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:58:40,901 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:58:40,902 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 10:58:40,907 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0ce0a95344df49e981f2d4e98996f0d6 2023-07-12 10:58:40,907 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/rep_barrier/0ce0a95344df49e981f2d4e98996f0d6 2023-07-12 10:58:40,907 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:40,907 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 10:58:40,908 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table 2023-07-12 10:58:40,908 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table 2023-07-12 10:58:40,909 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 10:58:40,915 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3d9afdddc7e1488ba950023ac0c57891 2023-07-12 10:58:40,915 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/3d9afdddc7e1488ba950023ac0c57891 2023-07-12 10:58:40,919 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/a69ee6b8f1cc4724a5b721bd5c87f29a 2023-07-12 10:58:40,923 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/d1ffc01c5dfe485096e4d50a2844f7e1 2023-07-12 10:58:40,923 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:40,924 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740 2023-07-12 10:58:40,925 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740 2023-07-12 10:58:40,927 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 10:58:40,928 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 10:58:40,928 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=175; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11676979200, jitterRate=0.08750343322753906}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 10:58:40,929 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 10:58:40,929 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=138, masterSystemTime=1689159520856 2023-07-12 10:58:40,930 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-12 10:58:40,931 DEBUG [RS:0;jenkins-hbase9:41263-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-12 10:58:40,931 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-12 10:58:40,931 DEBUG [RS:0;jenkins-hbase9:41263-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-12 10:58:40,934 DEBUG [RS:0;jenkins-hbase9:41263-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 27339 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-12 10:58:40,934 DEBUG [RS:0;jenkins-hbase9:41263-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 16770 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-12 10:58:40,934 DEBUG [RS:0;jenkins-hbase9:41263-shortCompactions-0] regionserver.HStore(1912): 1588230740/info is initiating minor compaction (all files) 2023-07-12 10:58:40,935 DEBUG [RS:0;jenkins-hbase9:41263-longCompactions-0] regionserver.HStore(1912): 1588230740/table is initiating minor compaction (all files) 2023-07-12 10:58:40,935 INFO [RS:0;jenkins-hbase9:41263-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 1588230740/info in hbase:meta,,1.1588230740 2023-07-12 10:58:40,935 INFO [RS:0;jenkins-hbase9:41263-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 1588230740/table in hbase:meta,,1.1588230740 2023-07-12 10:58:40,935 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 10:58:40,935 INFO [RS:0;jenkins-hbase9:41263-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/b49a1b1f3cde4a5f98186fb585abc133, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/f03719a871834b2389c705f4609fdcac, 
hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/b454ddcab3764fdc8af2fffba894338f] into tmpdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp, totalSize=26.7 K 2023-07-12 10:58:40,935 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 10:58:40,935 INFO [RS:0;jenkins-hbase9:41263-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/a69ee6b8f1cc4724a5b721bd5c87f29a, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/3d9afdddc7e1488ba950023ac0c57891, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/d1ffc01c5dfe485096e4d50a2844f7e1] into tmpdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp, totalSize=16.4 K 2023-07-12 10:58:40,936 DEBUG [RS:0;jenkins-hbase9:41263-shortCompactions-0] compactions.Compactor(207): Compacting b49a1b1f3cde4a5f98186fb585abc133, keycount=22, bloomtype=NONE, size=7.3 K, encoding=NONE, compression=NONE, seqNum=16, earliestPutTs=1689159487691 2023-07-12 10:58:40,936 DEBUG [RS:0;jenkins-hbase9:41263-longCompactions-0] compactions.Compactor(207): Compacting a69ee6b8f1cc4724a5b721bd5c87f29a, keycount=4, bloomtype=NONE, size=4.8 K, encoding=NONE, compression=NONE, seqNum=16, earliestPutTs=1689159487593 2023-07-12 10:58:40,936 DEBUG [RS:0;jenkins-hbase9:41263-shortCompactions-0] compactions.Compactor(207): Compacting f03719a871834b2389c705f4609fdcac, keycount=62, bloomtype=NONE, size=11.7 K, encoding=NONE, compression=NONE, seqNum=156, earliestPutTs=1689159489684 2023-07-12 10:58:40,936 DEBUG [RS:0;jenkins-hbase9:41263-longCompactions-0] compactions.Compactor(207): Compacting 3d9afdddc7e1488ba950023ac0c57891, keycount=23, bloomtype=NONE, size=7.0 K, encoding=NONE, compression=NONE, seqNum=156, earliestPutTs=9223372036854775807 2023-07-12 10:58:40,937 DEBUG [RS:0;jenkins-hbase9:41263-shortCompactions-0] compactions.Compactor(207): Compacting b454ddcab3764fdc8af2fffba894338f, keycount=26, bloomtype=NONE, size=7.7 K, encoding=NONE, compression=NONE, seqNum=171, earliestPutTs=1689159512016 2023-07-12 10:58:40,937 DEBUG [RS:0;jenkins-hbase9:41263-longCompactions-0] compactions.Compactor(207): Compacting d1ffc01c5dfe485096e4d50a2844f7e1, keycount=2, bloomtype=NONE, size=4.7 K, encoding=NONE, compression=NONE, seqNum=171, earliestPutTs=1689159513040 2023-07-12 10:58:40,937 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,41263,1689159519156, state=OPEN 2023-07-12 10:58:40,939 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 10:58:40,939 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 10:58:40,942 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=137 2023-07-12 10:58:40,942 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=137, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,41263,1689159519156 in 237 msec 2023-07-12 10:58:40,943 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=134 2023-07-12 10:58:40,944 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=134, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 406 msec 2023-07-12 10:58:40,953 INFO [RS:0;jenkins-hbase9:41263-shortCompactions-0] throttle.PressureAwareThroughputController(145): 1588230740#info#compaction#18 average throughput is 5.17 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-12 10:58:40,957 INFO [RS:0;jenkins-hbase9:41263-longCompactions-0] throttle.PressureAwareThroughputController(145): 1588230740#table#compaction#19 average throughput is 0.26 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-12 10:58:40,976 DEBUG [RS:0;jenkins-hbase9:41263-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/info/9746370004b64e0092ed4491146b79dd as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/9746370004b64e0092ed4491146b79dd 2023-07-12 10:58:40,978 DEBUG [RS:0;jenkins-hbase9:41263-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/table/8d1790e7af7f47f693209cca99ed2577 as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/8d1790e7af7f47f693209cca99ed2577 2023-07-12 10:58:40,983 DEBUG [RS:0;jenkins-hbase9:41263-shortCompactions-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-12 10:58:40,984 INFO [RS:0;jenkins-hbase9:41263-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 1588230740/info of 1588230740 into 9746370004b64e0092ed4491146b79dd(size=10.1 K), total size for store is 10.1 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-07-12 10:58:40,984 DEBUG [RS:0;jenkins-hbase9:41263-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 1588230740: 2023-07-12 10:58:40,984 INFO [RS:0;jenkins-hbase9:41263-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:meta,,1.1588230740, storeName=1588230740/info, priority=13, startTime=1689159520929; duration=0sec 2023-07-12 10:58:40,984 DEBUG [RS:0;jenkins-hbase9:41263-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-12 10:58:40,985 INFO [RS:0;jenkins-hbase9:41263-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 1588230740/table of 1588230740 into 8d1790e7af7f47f693209cca99ed2577(size=4.9 K), total size for store is 4.9 K. This selection was in queue for 0sec, and took 0sec to execute. 
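The "Exploring compaction algorithm has selected 3 files ... with 1 in ratio" entries above reflect a ratio-based eligibility test: a candidate set is acceptable only if no single file is too large relative to the rest of the set, using the ratio 1.200000 from the compaction configuration. A simplified sketch of that test (illustrative, not the actual ExploringCompactionPolicy code; the byte sizes in main are approximations of the three info files listed above):

public class RatioCheckSketch {
    // Each file must satisfy size <= ratio * (sum of the other files in the candidate set).
    static boolean withinRatio(long[] fileSizes, double ratio) {
        long total = 0;
        for (long s : fileSizes) {
            total += s;
        }
        for (long s : fileSizes) {
            if (s > ratio * (total - s)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Approximate sizes of the selected 1588230740/info files (~7.3 K, 11.7 K, 7.7 K; ~27339 bytes total)
        System.out.println(withinRatio(new long[] {7478, 11981, 7880}, 1.2));
    }
}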
2023-07-12 10:58:40,985 DEBUG [RS:0;jenkins-hbase9:41263-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 1588230740: 2023-07-12 10:58:40,985 INFO [RS:0;jenkins-hbase9:41263-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:meta,,1.1588230740, storeName=1588230740/table, priority=13, startTime=1689159520931; duration=0sec 2023-07-12 10:58:40,985 DEBUG [RS:0;jenkins-hbase9:41263-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-12 10:58:41,113 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,42625,1689159518976] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:41,113 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase9.apache.org/172.31.2.10:34455 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:34455 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 10:58:41,114 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase9.apache.org/172.31.2.10:34455 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:34455 2023-07-12 10:58:41,217 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,42625,1689159518976] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase9.apache.org/172.31.2.10:34455 this server is in the failed servers list 2023-07-12 10:58:41,422 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,42625,1689159518976] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase9.apache.org/172.31.2.10:34455 this server is in the failed servers list 2023-07-12 10:58:41,730 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,42625,1689159518976] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase9.apache.org/172.31.2.10:34455 this server is in the failed servers list 2023-07-12 10:58:42,081 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=1553ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=1503ms 2023-07-12 10:58:42,238 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,42625,1689159518976] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase9.apache.org/172.31.2.10:34455 this server is in the failed servers list 2023-07-12 10:58:42,353 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-12 10:58:43,244 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase9.apache.org/172.31.2.10:34455 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:34455 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 10:58:43,246 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase9.apache.org/172.31.2.10:34455 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:34455 2023-07-12 10:58:43,583 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=3055ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=3005ms 2023-07-12 10:58:45,036 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=4508ms, expected min=1 server(s), max=NO_LIMIT server(s), master is running 2023-07-12 10:58:45,036 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
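The "Waiting on regionserver count" entries above poll until enough region servers have checked in and the count has stopped changing, or until the overall timeout expires (here waited=4508ms against timeout=4500ms before the master proceeds). A rough sketch of that kind of wait loop, illustrative only and not the ServerManager implementation:

public class WaitForServersSketch {
    // Block until at least minServers have reported in and the count has been stable for
    // 'interval' ms, or until 'timeout' ms have passed overall.
    static void waitForRegionServers(java.util.function.IntSupplier liveCount, int minServers,
            long interval, long timeout) throws InterruptedException {
        long start = System.currentTimeMillis();
        long lastChange = start;
        int last = liveCount.getAsInt();
        while (true) {
            long now = System.currentTimeMillis();
            int count = liveCount.getAsInt();
            if (count != last) {
                last = count;
                lastChange = now;
            }
            boolean stable = now - lastChange >= interval;
            if ((count >= minServers && stable) || now - start >= timeout) {
                return;
            }
            Thread.sleep(50);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        waitForRegionServers(() -> 3, 1, 50, 4500); // e.g. min=1 server, 4500 ms timeout as above
    }
}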
2023-07-12 10:58:45,039 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=e5addb24bba6e8be9d4cddc12a45ff25, regionState=OPEN, lastHost=jenkins-hbase9.apache.org,33873,1689159506858, regionLocation=jenkins-hbase9.apache.org,33873,1689159506858, openSeqNum=27 2023-07-12 10:58:45,039 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=b71f0d9015a7b2292849acab5e81c0c6, regionState=OPEN, lastHost=jenkins-hbase9.apache.org,33873,1689159506858, regionLocation=jenkins-hbase9.apache.org,33873,1689159506858, openSeqNum=2 2023-07-12 10:58:45,039 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=0832c48321f808d3b4d6fb68605b1448, regionState=OPEN, lastHost=jenkins-hbase9.apache.org,34455,1689159506648, regionLocation=jenkins-hbase9.apache.org,34455,1689159506648, openSeqNum=83 2023-07-12 10:58:45,039 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 10:58:45,039 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689159585039 2023-07-12 10:58:45,039 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689159645039 2023-07-12 10:58:45,039 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-12 10:58:45,055 INFO [PEWorker-5] procedure.ServerCrashProcedure(199): jenkins-hbase9.apache.org,41887,1689159507026 had 1 regions 2023-07-12 10:58:45,056 INFO [PEWorker-2] procedure.ServerCrashProcedure(199): jenkins-hbase9.apache.org,33873,1689159506858 had 2 regions 2023-07-12 10:58:45,056 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,42625,1689159518976-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:45,055 INFO [PEWorker-3] procedure.ServerCrashProcedure(199): jenkins-hbase9.apache.org,34455,1689159506648 had 1 regions 2023-07-12 10:58:45,056 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,42625,1689159518976-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:45,056 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,42625,1689159518976-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:45,056 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase9:42625, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:45,056 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:45,056 WARN [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1240): hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 
is NOT online; state={e5addb24bba6e8be9d4cddc12a45ff25 state=OPEN, ts=1689159525039, server=jenkins-hbase9.apache.org,33873,1689159506858}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined. 2023-07-12 10:58:45,058 INFO [PEWorker-3] procedure.ServerCrashProcedure(300): Splitting WALs pid=136, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,34455,1689159506648, splitWal=true, meta=false, isMeta: false 2023-07-12 10:58:45,058 INFO [PEWorker-2] procedure.ServerCrashProcedure(300): Splitting WALs pid=135, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,33873,1689159506858, splitWal=true, meta=false, isMeta: false 2023-07-12 10:58:45,058 INFO [PEWorker-5] procedure.ServerCrashProcedure(300): Splitting WALs pid=134, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,41887,1689159507026, splitWal=true, meta=true, isMeta: false 2023-07-12 10:58:45,059 DEBUG [PEWorker-3] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,34455,1689159506648-splitting 2023-07-12 10:58:45,060 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,34455,1689159506648-splitting dir is empty, no logs to split. 2023-07-12 10:58:45,061 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase9.apache.org,34455,1689159506648 WAL count=0, meta=false 2023-07-12 10:58:45,061 DEBUG [PEWorker-2] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,33873,1689159506858-splitting 2023-07-12 10:58:45,062 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,33873,1689159506858-splitting dir is empty, no logs to split. 2023-07-12 10:58:45,062 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase9.apache.org,33873,1689159506858 WAL count=0, meta=false 2023-07-12 10:58:45,063 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,41887,1689159507026-splitting dir is empty, no logs to split. 2023-07-12 10:58:45,063 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase9.apache.org,41887,1689159507026 WAL count=0, meta=false 2023-07-12 10:58:45,064 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,34455,1689159506648-splitting dir is empty, no logs to split. 2023-07-12 10:58:45,064 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase9.apache.org,34455,1689159506648 WAL count=0, meta=false 2023-07-12 10:58:45,064 DEBUG [PEWorker-3] procedure.ServerCrashProcedure(290): Check if jenkins-hbase9.apache.org,34455,1689159506648 WAL splitting is done? wals=0, meta=false 2023-07-12 10:58:45,065 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,33873,1689159506858-splitting dir is empty, no logs to split. 
2023-07-12 10:58:45,065 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase9.apache.org,33873,1689159506858 WAL count=0, meta=false 2023-07-12 10:58:45,065 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(290): Check if jenkins-hbase9.apache.org,33873,1689159506858 WAL splitting is done? wals=0, meta=false 2023-07-12 10:58:45,066 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,41887,1689159507026-splitting dir is empty, no logs to split. 2023-07-12 10:58:45,066 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase9.apache.org,41887,1689159507026 WAL count=0, meta=false 2023-07-12 10:58:45,066 DEBUG [PEWorker-5] procedure.ServerCrashProcedure(290): Check if jenkins-hbase9.apache.org,41887,1689159507026 WAL splitting is done? wals=0, meta=false 2023-07-12 10:58:45,066 INFO [PEWorker-3] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase9.apache.org,34455,1689159506648 failed, ignore...File hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,34455,1689159506648-splitting does not exist. 2023-07-12 10:58:45,067 WARN [master/jenkins-hbase9:0.Chore.1] janitor.CatalogJanitor(172): unknown_server=jenkins-hbase9.apache.org,33873,1689159506858/hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25., unknown_server=jenkins-hbase9.apache.org,33873,1689159506858/hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6., unknown_server=jenkins-hbase9.apache.org,34455,1689159506648/hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:45,067 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=136, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, ASSIGN}] 2023-07-12 10:58:45,067 INFO [PEWorker-2] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase9.apache.org,33873,1689159506858 failed, ignore...File hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,33873,1689159506858-splitting does not exist. 
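The split-log entries above only find empty "-splitting" directories (WAL count=0) for the crashed servers and then attempt to remove them, which is why the later "Remove WAL directory ... failed, ignore" messages are harmless. A minimal sketch of that kind of check with the Hadoop FileSystem API (the path argument is a placeholder, not one of the directories from this run):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative check of a "<servername>-splitting" WAL directory: count the files,
// and if there are none, delete the now-empty directory.
public class SplittingDirCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path splitting = new Path(args[0]); // e.g. .../WALs/<servername>-splitting
        if (!fs.exists(splitting)) {
            System.out.println("does not exist, nothing to split");
            return;
        }
        FileStatus[] wals = fs.listStatus(splitting);
        System.out.println("WAL count=" + wals.length);
        if (wals.length == 0) {
            fs.delete(splitting, false); // remove the empty directory
        }
    }
}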
2023-07-12 10:58:45,068 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=136, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, ASSIGN 2023-07-12 10:58:45,068 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, ASSIGN}, {pid=141, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=b71f0d9015a7b2292849acab5e81c0c6, ASSIGN}] 2023-07-12 10:58:45,077 INFO [PEWorker-5] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase9.apache.org,41887,1689159507026 after splitting done 2023-07-12 10:58:45,077 DEBUG [PEWorker-5] master.DeadServer(114): Removed jenkins-hbase9.apache.org,41887,1689159507026 from processing; numProcessing=2 2023-07-12 10:58:45,077 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=139, ppid=136, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-12 10:58:45,077 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, ASSIGN 2023-07-12 10:58:45,078 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=141, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=b71f0d9015a7b2292849acab5e81c0c6, ASSIGN 2023-07-12 10:58:45,078 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-12 10:58:45,078 DEBUG [jenkins-hbase9:42625] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 10:58:45,079 DEBUG [jenkins-hbase9:42625] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:45,079 DEBUG [jenkins-hbase9:42625] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:45,079 DEBUG [jenkins-hbase9:42625] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:45,079 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=141, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=b71f0d9015a7b2292849acab5e81c0c6, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-12 10:58:45,079 DEBUG [jenkins-hbase9:42625] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:45,079 DEBUG [jenkins-hbase9:42625] balancer.BaseLoadBalancer$Cluster(378): Number of tables=2, number of hosts=1, number of racks=1 2023-07-12 10:58:45,081 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished 
pid=134, state=SUCCESS; ServerCrashProcedure jenkins-hbase9.apache.org,41887,1689159507026, splitWal=true, meta=true in 4.9430 sec 2023-07-12 10:58:45,082 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=0832c48321f808d3b4d6fb68605b1448, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:45,082 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=e5addb24bba6e8be9d4cddc12a45ff25, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:45,082 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159525082"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159525082"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159525082"}]},"ts":"1689159525082"} 2023-07-12 10:58:45,082 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159525082"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159525082"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159525082"}]},"ts":"1689159525082"} 2023-07-12 10:58:45,085 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=139, state=RUNNABLE; OpenRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,35789,1689159519370}] 2023-07-12 10:58:45,088 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=140, state=RUNNABLE; OpenRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,41263,1689159519156}] 2023-07-12 10:58:45,232 DEBUG [jenkins-hbase9:42625] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 10:58:45,232 DEBUG [jenkins-hbase9:42625] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:45,232 DEBUG [jenkins-hbase9:42625] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:45,232 DEBUG [jenkins-hbase9:42625] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:45,232 DEBUG [jenkins-hbase9:42625] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:45,232 DEBUG [jenkins-hbase9:42625] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:45,233 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=b71f0d9015a7b2292849acab5e81c0c6, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:45,234 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689159525233"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159525233"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159525233"}]},"ts":"1689159525233"} 2023-07-12 10:58:45,235 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=141, state=RUNNABLE; OpenRegionProcedure 
b71f0d9015a7b2292849acab5e81c0c6, server=jenkins-hbase9.apache.org,39005,1689159519576}] 2023-07-12 10:58:45,238 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:45,238 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:45,239 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:47942, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:45,243 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:45,243 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0832c48321f808d3b4d6fb68605b1448, NAME => 'hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:45,244 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:58:45,244 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. service=MultiRowMutationService 2023-07-12 10:58:45,244 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-12 10:58:45,244 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:45,244 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:45,244 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:45,244 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:45,245 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 
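The "Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint ... from HTD of hbase:rsgroup" entry above shows a coprocessor declared directly on the table descriptor. A minimal sketch of declaring one on a hypothetical table with the 2.x client API (the table and family names below are examples, not taken from this run):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

// Illustrative: attach the MultiRowMutationEndpoint coprocessor to a table descriptor,
// the same way hbase:rsgroup carries it in its HTD.
public class CoprocessorOnDescriptor {
    public static void main(String[] args) throws Exception {
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("example:table"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .build();
        System.out.println(td);
    }
}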
2023-07-12 10:58:45,245 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e5addb24bba6e8be9d4cddc12a45ff25, NAME => 'hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:45,245 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:45,246 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:45,246 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:45,246 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:45,249 INFO [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:45,249 INFO [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:45,250 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m 2023-07-12 10:58:45,250 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m 2023-07-12 10:58:45,250 INFO [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0832c48321f808d3b4d6fb68605b1448 columnFamilyName m 2023-07-12 10:58:45,250 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info 2023-07-12 10:58:45,250 DEBUG 
[StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info 2023-07-12 10:58:45,251 INFO [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e5addb24bba6e8be9d4cddc12a45ff25 columnFamilyName info 2023-07-12 10:58:45,257 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/d15ac85af42147dfab1746d5f141cc5a 2023-07-12 10:58:45,257 INFO [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(310): Store=e5addb24bba6e8be9d4cddc12a45ff25/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:45,259 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:45,260 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:45,264 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:45,265 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened e5addb24bba6e8be9d4cddc12a45ff25; next sequenceid=31; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10684481760, jitterRate=-0.004930093884468079}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:45,266 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for e5addb24bba6e8be9d4cddc12a45ff25: 2023-07-12 10:58:45,268 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase9.apache.org/172.31.2.10:34455 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:34455 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 10:58:45,268 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase9.apache.org/172.31.2.10:34455 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:34455 2023-07-12 10:58:45,269 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/bffd8719b2904374a71e69e548411438 2023-07-12 10:58:45,278 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,42625,1689159518976] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=46, started=4171 ms ago, cancelled=false, msg=Call to address=jenkins-hbase9.apache.org/172.31.2.10:34455 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:34455, details=row '\x00' on table 'hbase:rsgroup' at region=hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448., hostname=jenkins-hbase9.apache.org,34455,1689159506648, seqNum=83, see https://s.apache.org/timeout, exception=java.net.ConnectException: Call to address=jenkins-hbase9.apache.org/172.31.2.10:34455 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:34455 at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:186) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:385) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.BufferCallBeforeInitHandler.userEventTriggered(BufferCallBeforeInitHandler.java:99) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:398) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:368) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1428) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:396) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:913) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.failInit(NettyRpcConnection.java:195) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$300(NettyRpcConnection.java:76) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:296) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:287) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:674) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:693) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:34455 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 10:58:45,278 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25., pid=143, masterSystemTime=1689159525241 2023-07-12 10:58:45,282 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:45,282 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/ff557c1700754976a0534ed7a4fce455 2023-07-12 10:58:45,282 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 
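The "Call exception, tries=6, retries=46, started=4171 ms ago" entry above is the client retry machinery backing off while the cached location for the hbase:rsgroup row still points at the dead server 172.31.2.10:34455 (seqNum=83 in the message); retries continue until the location is re-read from hbase:meta. The retry count and base pause are client-side configuration (retries=46 presumably comes from the test setup rather than a stock default). A minimal sketch using the standard keys with illustrative values:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Illustrative client retry tuning; the values here are examples, not the ones used in this run.
public class ClientRetryConfig {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.client.retries.number", 10); // max retry attempts per operation
        conf.setLong("hbase.client.pause", 100L);       // base pause in ms between retries
        System.out.println(conf.get("hbase.client.retries.number"));
    }
}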
2023-07-12 10:58:45,282 INFO [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(310): Store=0832c48321f808d3b4d6fb68605b1448/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:45,282 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=e5addb24bba6e8be9d4cddc12a45ff25, regionState=OPEN, openSeqNum=31, regionLocation=jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:45,282 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159525282"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159525282"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159525282"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159525282"}]},"ts":"1689159525282"} 2023-07-12 10:58:45,283 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:45,284 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:45,286 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=140 2023-07-12 10:58:45,286 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=140, state=SUCCESS; OpenRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,41263,1689159519156 in 196 msec 2023-07-12 10:58:45,287 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, ASSIGN in 218 msec 2023-07-12 10:58:45,288 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:45,289 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 0832c48321f808d3b4d6fb68605b1448; next sequenceid=91; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@7b3884e9, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:45,289 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 0832c48321f808d3b4d6fb68605b1448: 2023-07-12 10:58:45,290 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448., pid=142, masterSystemTime=1689159525238 2023-07-12 10:58:45,294 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=0832c48321f808d3b4d6fb68605b1448, regionState=OPEN, openSeqNum=91, regionLocation=jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:45,294 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159525294"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159525294"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159525294"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159525294"}]},"ts":"1689159525294"} 2023-07-12 10:58:45,297 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=139 2023-07-12 10:58:45,297 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=139, state=SUCCESS; OpenRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,35789,1689159519370 in 211 msec 2023-07-12 10:58:45,298 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:45,299 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:45,299 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=136 2023-07-12 10:58:45,299 INFO [PEWorker-3] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase9.apache.org,34455,1689159506648 after splitting done 2023-07-12 10:58:45,299 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=136, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, ASSIGN in 230 msec 2023-07-12 10:58:45,300 DEBUG [PEWorker-3] master.DeadServer(114): Removed jenkins-hbase9.apache.org,34455,1689159506648 from processing; numProcessing=1 2023-07-12 10:58:45,301 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=136, state=SUCCESS; ServerCrashProcedure jenkins-hbase9.apache.org,34455,1689159506648, splitWal=true, meta=false in 5.0210 sec 2023-07-12 10:58:45,387 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:45,387 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:45,389 INFO [RS-EventLoopGroup-16-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:38458, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:45,392 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 
2023-07-12 10:58:45,392 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b71f0d9015a7b2292849acab5e81c0c6, NAME => 'hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:45,393 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:45,393 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:45,393 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:45,393 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:45,394 INFO [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:45,395 DEBUG [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6/q 2023-07-12 10:58:45,395 DEBUG [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6/q 2023-07-12 10:58:45,395 INFO [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b71f0d9015a7b2292849acab5e81c0c6 columnFamilyName q 2023-07-12 10:58:45,396 INFO [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] regionserver.HStore(310): Store=b71f0d9015a7b2292849acab5e81c0c6/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:45,396 INFO [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:45,397 DEBUG 
[StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6/u 2023-07-12 10:58:45,397 DEBUG [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6/u 2023-07-12 10:58:45,397 INFO [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b71f0d9015a7b2292849acab5e81c0c6 columnFamilyName u 2023-07-12 10:58:45,398 INFO [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] regionserver.HStore(310): Store=b71f0d9015a7b2292849acab5e81c0c6/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:45,399 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:45,400 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:45,402 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
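The FlushLargeStoresPolicy line above falls back because hbase:quota declares no per-family flush lower bound: with a 128 MB region memstore flush size and two families (q and u), the derived bound is 128 MB / 2 = 64 MB, matching the "(64.0 M)" in the log. A hedged sketch of pinning the bound explicitly on a table descriptor (the table name and the 32 MB value are made up for illustration):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class FlushLowerBoundExample {
  public static TableDescriptor build() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("q"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("u"))
        // Explicit 32 MB per-family flush lower bound; without this table-level
        // setting the server derives memstore flush size / number of families,
        // as the FlushLargeStoresPolicy record above reports.
        .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
            String.valueOf(32L * 1024 * 1024))
        .build();
  }
}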
2023-07-12 10:58:45,404 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:45,405 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened b71f0d9015a7b2292849acab5e81c0c6; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11351050240, jitterRate=0.05714893341064453}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-12 10:58:45,405 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for b71f0d9015a7b2292849acab5e81c0c6: 2023-07-12 10:58:45,406 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6., pid=144, masterSystemTime=1689159525387 2023-07-12 10:58:45,410 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:45,411 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:45,411 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=b71f0d9015a7b2292849acab5e81c0c6, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:45,412 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689159525411"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159525411"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159525411"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159525411"}]},"ts":"1689159525411"} 2023-07-12 10:58:45,417 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=141 2023-07-12 10:58:45,417 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=141, state=SUCCESS; OpenRegionProcedure b71f0d9015a7b2292849acab5e81c0c6, server=jenkins-hbase9.apache.org,39005,1689159519576 in 178 msec 2023-07-12 10:58:45,419 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=135 2023-07-12 10:58:45,419 INFO [PEWorker-1] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase9.apache.org,33873,1689159506858 after splitting done 2023-07-12 10:58:45,419 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=b71f0d9015a7b2292849acab5e81c0c6, ASSIGN in 349 msec 2023-07-12 10:58:45,419 DEBUG [PEWorker-1] master.DeadServer(114): Removed jenkins-hbase9.apache.org,33873,1689159506858 from processing; numProcessing=0 2023-07-12 10:58:45,421 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=135, state=SUCCESS; ServerCrashProcedure jenkins-hbase9.apache.org,33873,1689159506858, splitWal=true, meta=false in 5.2090 sec 2023-07-12 10:58:46,057 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): 
master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/namespace 2023-07-12 10:58:46,076 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 10:58:46,079 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 10:58:46,079 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 6.267sec 2023-07-12 10:58:46,079 INFO [master/jenkins-hbase9:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-12 10:58:46,080 INFO [master/jenkins-hbase9:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 10:58:46,080 INFO [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 10:58:46,080 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,42625,1689159518976-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 10:58:46,080 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,42625,1689159518976-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-12 10:58:46,080 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 10:58:46,085 DEBUG [Listener at localhost/44831] zookeeper.ReadOnlyZKClient(139): Connect 0x56c9bc71 to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:46,090 DEBUG [Listener at localhost/44831] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d5b9b6a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:46,091 DEBUG [hconnection-0xd464a40-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:46,093 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:56880, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:46,097 INFO [Listener at localhost/44831] hbase.HBaseTestingUtility(1262): HBase has been restarted 2023-07-12 10:58:46,097 DEBUG [Listener at localhost/44831] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x56c9bc71 to 127.0.0.1:49301 2023-07-12 10:58:46,097 DEBUG [Listener at localhost/44831] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:46,098 INFO [Listener at localhost/44831] hbase.HBaseTestingUtility(2939): Invalidated connection. 
Updating master addresses before: jenkins-hbase9.apache.org:42625 after: jenkins-hbase9.apache.org:42625 2023-07-12 10:58:46,099 DEBUG [Listener at localhost/44831] zookeeper.ReadOnlyZKClient(139): Connect 0x33f2d925 to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:46,103 DEBUG [Listener at localhost/44831] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@cd99151, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:46,103 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:46,388 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 10:58:46,596 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 10:58:46,597 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:quota' 2023-07-12 10:58:46,602 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 10:58:48,023 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$NodeFailoverWorker(712): Not transferring queue since we are shutting down 2023-07-12 10:58:49,304 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,42625,1689159518976] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:49,306 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:47954, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:49,308 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 10:58:49,308 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
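The hbase.Waiter line above is the test harness polling a condition with a 60 s ceiling. A minimal sketch of that polling pattern, assuming the test-utility class org.apache.hadoop.hbase.Waiter is on the classpath; the predicate shown here (waiting for hbase:rsgroup to exist) is purely illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.Waiter;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class WaitForRSGroupTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Poll for up to 60 s, the same ceiling printed by hbase.Waiter in the log.
      Waiter.waitFor(conf, 60_000, new Waiter.Predicate<Exception>() {
        @Override
        public boolean evaluate() throws Exception {
          return admin.tableExists(TableName.valueOf("hbase:rsgroup"));
        }
      });
    }
  }
}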
2023-07-12 10:58:49,323 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:49,324 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:49,324 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:49,327 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rsgroup 2023-07-12 10:58:49,327 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 10:58:49,407 DEBUG [Listener at localhost/44831] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 10:58:49,408 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:60770, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 10:58:49,410 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-12 10:58:49,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(492): Client=jenkins//172.31.2.10 set balanceSwitch=false 2023-07-12 10:58:49,411 DEBUG [Listener at localhost/44831] zookeeper.ReadOnlyZKClient(139): Connect 0x015b285a to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:49,419 DEBUG [Listener at localhost/44831] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1f101fdf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:49,420 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:49,422 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:49,424 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1015920fb080027 connected 2023-07-12 10:58:49,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:49,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:49,429 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:49,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 10:58:49,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:49,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:49,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:49,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:49,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:49,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:49,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:49,439 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-12 10:58:49,454 INFO [Listener at localhost/44831] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:49,454 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:49,454 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:49,454 INFO [Listener at localhost/44831] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:49,454 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:49,454 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:49,454 INFO [Listener at localhost/44831] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:49,455 INFO [Listener at localhost/44831] ipc.NettyRpcServer(120): Bind to /172.31.2.10:44255 2023-07-12 10:58:49,455 INFO [Listener at localhost/44831] hfile.BlockCacheFactory(142): 
Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:49,456 DEBUG [Listener at localhost/44831] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:49,457 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:49,458 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:49,458 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44255 connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:49,462 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:442550x0, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:49,464 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44255-0x1015920fb080028 connected 2023-07-12 10:58:49,464 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(162): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 10:58:49,465 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(162): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-12 10:58:49,466 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:49,466 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44255 2023-07-12 10:58:49,466 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44255 2023-07-12 10:58:49,466 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44255 2023-07-12 10:58:49,469 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44255 2023-07-12 10:58:49,469 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44255 2023-07-12 10:58:49,471 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:49,471 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:49,471 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:49,471 INFO [Listener at localhost/44831] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:49,471 INFO [Listener at localhost/44831] 
http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:49,471 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:49,472 INFO [Listener at localhost/44831] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 10:58:49,472 INFO [Listener at localhost/44831] http.HttpServer(1146): Jetty bound to port 39875 2023-07-12 10:58:49,472 INFO [Listener at localhost/44831] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:49,475 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:49,475 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@798dfac2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:49,475 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:49,475 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@74dcb463{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:49,587 INFO [Listener at localhost/44831] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:49,588 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:49,588 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:49,588 INFO [Listener at localhost/44831] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 10:58:49,589 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:49,589 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4c297362{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/jetty-0_0_0_0-39875-hbase-server-2_4_18-SNAPSHOT_jar-_-any-683308287700330714/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:49,591 INFO [Listener at localhost/44831] server.AbstractConnector(333): Started ServerConnector@2ad1f356{HTTP/1.1, (http/1.1)}{0.0.0.0:39875} 2023-07-12 10:58:49,591 INFO [Listener at localhost/44831] server.Server(415): Started @53283ms 2023-07-12 10:58:49,594 INFO [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(951): ClusterId : 2ee0ec36-84f9-4576-888d-f37f0b52beaa 2023-07-12 10:58:49,594 DEBUG [RS:3;jenkins-hbase9:44255] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc 
initializing 2023-07-12 10:58:49,595 DEBUG [RS:3;jenkins-hbase9:44255] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:49,595 DEBUG [RS:3;jenkins-hbase9:44255] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:49,597 DEBUG [RS:3;jenkins-hbase9:44255] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:49,600 DEBUG [RS:3;jenkins-hbase9:44255] zookeeper.ReadOnlyZKClient(139): Connect 0x6f290853 to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:49,604 DEBUG [RS:3;jenkins-hbase9:44255] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61eefbc6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:49,604 DEBUG [RS:3;jenkins-hbase9:44255] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@65d16323, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:49,613 DEBUG [RS:3;jenkins-hbase9:44255] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase9:44255 2023-07-12 10:58:49,613 INFO [RS:3;jenkins-hbase9:44255] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:49,613 INFO [RS:3;jenkins-hbase9:44255] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:49,614 DEBUG [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 10:58:49,614 INFO [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,42625,1689159518976 with isa=jenkins-hbase9.apache.org/172.31.2.10:44255, startcode=1689159529453 2023-07-12 10:58:49,614 DEBUG [RS:3;jenkins-hbase9:44255] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:49,616 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:53157, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.11 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:49,616 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42625] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:49,616 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
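The registration records above (reportForDuty, "Registering regionserver=...", and the rsgroup listener's "Updating default servers") show the restored region server rejoining the cluster and the default RSGroup. A hedged sketch of confirming the new member from a client using the standard cluster-metrics call (nothing below is specific to this test run):

import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ListLiveServers {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      ClusterMetrics metrics =
          admin.getClusterMetrics(EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS));
      // Each key is one registered region server, e.g. the
      // jenkins-hbase9.apache.org,44255,... instance that just reported for duty.
      for (ServerName sn : metrics.getLiveServerMetrics().keySet()) {
        System.out.println(sn.getAddress());
      }
    }
  }
}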
2023-07-12 10:58:49,617 DEBUG [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5 2023-07-12 10:58:49,617 DEBUG [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42757 2023-07-12 10:58:49,617 DEBUG [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41045 2023-07-12 10:58:49,618 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:49,618 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:49,618 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:49,619 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:35789-0x1015920fb08001e, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:49,619 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:49,619 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:49,620 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:49,620 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35789-0x1015920fb08001e, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:49,620 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,44255,1689159529453] 2023-07-12 10:58:49,621 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:49,621 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:49,621 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35789-0x1015920fb08001e, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:49,621 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:49,622 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:49,622 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:49,622 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35789-0x1015920fb08001e, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:49,622 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:49,623 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35789-0x1015920fb08001e, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:49,623 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 10:58:49,624 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-12 10:58:49,624 DEBUG [RS:3;jenkins-hbase9:44255] zookeeper.ZKUtil(162): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:49,624 WARN [RS:3;jenkins-hbase9:44255] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 10:58:49,624 INFO [RS:3;jenkins-hbase9:44255] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:49,624 DEBUG [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:49,630 DEBUG [RS:3;jenkins-hbase9:44255] zookeeper.ZKUtil(162): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:49,630 DEBUG [RS:3;jenkins-hbase9:44255] zookeeper.ZKUtil(162): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:49,630 DEBUG [RS:3;jenkins-hbase9:44255] zookeeper.ZKUtil(162): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:49,631 DEBUG [RS:3;jenkins-hbase9:44255] zookeeper.ZKUtil(162): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:49,632 DEBUG [RS:3;jenkins-hbase9:44255] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:49,632 INFO [RS:3;jenkins-hbase9:44255] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:49,633 INFO [RS:3;jenkins-hbase9:44255] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:49,633 INFO [RS:3;jenkins-hbase9:44255] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:49,633 INFO [RS:3;jenkins-hbase9:44255] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:49,637 INFO [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:49,639 INFO [RS:3;jenkins-hbase9:44255] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
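The WALFactory record at the start of this stretch shows the new region server instantiating AsyncFSWALProvider. Provider selection is driven by the hbase.wal.provider setting; a minimal, hedged sketch of pinning it in configuration (the chosen value is only an example, and the cluster default already resolves to asyncfs here):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderConfig {
  public static Configuration create() {
    Configuration conf = HBaseConfiguration.create();
    // "asyncfs" maps to AsyncFSWALProvider in HBase 2.x; "filesystem" would
    // select the classic FSHLog-based provider instead.
    conf.set("hbase.wal.provider", "asyncfs");
    return conf;
  }
}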
2023-07-12 10:58:49,640 DEBUG [RS:3;jenkins-hbase9:44255] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:49,640 DEBUG [RS:3;jenkins-hbase9:44255] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:49,640 DEBUG [RS:3;jenkins-hbase9:44255] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:49,640 DEBUG [RS:3;jenkins-hbase9:44255] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:49,640 DEBUG [RS:3;jenkins-hbase9:44255] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:49,641 DEBUG [RS:3;jenkins-hbase9:44255] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:49,641 DEBUG [RS:3;jenkins-hbase9:44255] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:49,641 DEBUG [RS:3;jenkins-hbase9:44255] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:49,641 DEBUG [RS:3;jenkins-hbase9:44255] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:49,641 DEBUG [RS:3;jenkins-hbase9:44255] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:49,643 INFO [RS:3;jenkins-hbase9:44255] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:49,643 INFO [RS:3;jenkins-hbase9:44255] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:49,644 INFO [RS:3;jenkins-hbase9:44255] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:49,657 INFO [RS:3;jenkins-hbase9:44255] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:49,657 INFO [RS:3;jenkins-hbase9:44255] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,44255,1689159529453-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:49,667 INFO [RS:3;jenkins-hbase9:44255] regionserver.Replication(203): jenkins-hbase9.apache.org,44255,1689159529453 started 2023-07-12 10:58:49,667 INFO [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,44255,1689159529453, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:44255, sessionid=0x1015920fb080028 2023-07-12 10:58:49,668 DEBUG [RS:3;jenkins-hbase9:44255] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:49,668 DEBUG [RS:3;jenkins-hbase9:44255] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:49,668 DEBUG [RS:3;jenkins-hbase9:44255] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,44255,1689159529453' 2023-07-12 10:58:49,668 DEBUG [RS:3;jenkins-hbase9:44255] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:49,668 DEBUG [RS:3;jenkins-hbase9:44255] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:49,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:49,668 DEBUG [RS:3;jenkins-hbase9:44255] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:49,669 DEBUG [RS:3;jenkins-hbase9:44255] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:49,669 DEBUG [RS:3;jenkins-hbase9:44255] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:49,669 DEBUG [RS:3;jenkins-hbase9:44255] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,44255,1689159529453' 2023-07-12 10:58:49,669 DEBUG [RS:3;jenkins-hbase9:44255] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:49,669 DEBUG [RS:3;jenkins-hbase9:44255] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:49,669 DEBUG [RS:3;jenkins-hbase9:44255] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:49,670 INFO [RS:3;jenkins-hbase9:44255] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 10:58:49,670 INFO [RS:3;jenkins-hbase9:44255] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
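The records that follow show the teardown adding a group named "master" and then attempting to move the master's address (jenkins-hbase9.apache.org:42625) into it, which the RSGroup endpoint rejects with a ConstraintException because that address is not a live region server. A rough, hedged sketch of the equivalent client-side calls via RSGroupAdminClient (the address is copied from the log; the error handling is illustrative only):

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterIntoGroup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdmin groups = new RSGroupAdminClient(conn);
      groups.addRSGroup("master");
      try {
        // The log shows this exact move being rejected: the master's address is
        // not a registered region server, so moveServers throws.
        groups.moveServers(
            Collections.singleton(Address.fromString("jenkins-hbase9.apache.org:42625")),
            "master");
      } catch (IOException e) {
        System.out.println("moveServers rejected: " + e.getMessage());
      }
    }
  }
}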
2023-07-12 10:58:49,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:49,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:49,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:49,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:49,677 DEBUG [hconnection-0x3a0ec6e9-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:49,679 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:56884, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:49,686 DEBUG [hconnection-0x3a0ec6e9-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:49,688 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:47956, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:49,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:49,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:49,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:42625] to rsgroup master 2023-07-12 10:58:49,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:42625 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:49,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] ipc.CallRunner(144): callId: 25 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.2.10:60770 deadline: 1689160729692, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:42625 is either offline or it does not exist. 2023-07-12 10:58:49,693 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:42625 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor63.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:42625 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-12 10:58:49,695 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-12 10:58:49,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup
2023-07-12 10:58:49,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-12 10:58:49,697 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:35789, jenkins-hbase9.apache.org:39005, jenkins-hbase9.apache.org:41263, jenkins-hbase9.apache.org:44255], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-12 10:58:49,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default
2023-07-12 10:58:49,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-12 10:58:49,750 INFO [Listener at localhost/44831] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testRSGroupsWithHBaseQuota Thread=555 (was 517) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x060d7140-SendThread(127.0.0.1:49301) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=42625 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x6f290853 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/356426229.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741898_1074, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase9:44255-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase9:44255Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42625 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp392500467-2098 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42625 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:42757 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1234087232) connection to localhost/127.0.0.1:42757 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp569243963-1772 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1771827885.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:42757 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:42757 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1549009812-1843 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1771827885.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741901_1077, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1093849043_17 at /127.0.0.1:53416 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741899_1075] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1781302255-1807 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741900_1076, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41263 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp569243963-1775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x1ca0ac65-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=42625 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1549009812-1845 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1781302255-1808 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp473583657-1747 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-3dbea4c-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1234087232) connection to localhost/127.0.0.1:42757 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,43835,1689159506481 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x3a0ec6e9-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x4318667d-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp392500467-2097 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=42625 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: region-location-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x483e2599-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp569243963-1773-acceptor-0@7c6125fb-ServerConnector@10f53854{HTTP/1.1, (http/1.1)}{0.0.0.0:34589} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1027353404_17 at /127.0.0.1:41692 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x33f2d925-SendThread(127.0.0.1:49301) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS:1;jenkins-hbase9:35789 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741902_1078, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x015b285a-SendThread(127.0.0.1:49301) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp392500467-2092 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1771827885.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2081394496_17 at /127.0.0.1:37714 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741901_1077] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.10@localhost:42757 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741899_1075, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp473583657-1743 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2081394496_17 at /127.0.0.1:53098 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741901_1077] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase9:35789-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp569243963-1774 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 
Potentially hanging thread: qtp1781302255-1805 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741899_1075, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1714881690-1832 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1771827885.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41263 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1781302255-1809 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741898_1074, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_810542760_17 at /127.0.0.1:53408 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741898_1074] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase9:44255 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase9:41263-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp569243963-1777 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
AsyncFSWAL-0-hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5-prefix:jenkins-hbase9.apache.org,41263,1689159519156 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-476dd00f-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41263 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_810542760_17 at /127.0.0.1:37692 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741898_1074] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1549009812-1847-acceptor-0@40b3e079-ServerConnector@5293f66e{HTTP/1.1, (http/1.1)}{0.0.0.0:46329} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x015b285a-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Session-HouseKeeper-2f9ad330-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1781302255-1806 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-5dc2d022-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x4318667d sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/356426229.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1836073772_17 at /127.0.0.1:53096 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741900_1076] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x483e2599 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/356426229.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp392500467-2093-acceptor-0@6dcb680c-ServerConnector@2ad1f356{HTTP/1.1, (http/1.1)}{0.0.0.0:39875} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x6f290853-SendThread(127.0.0.1:49301) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41263 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41263 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp392500467-2095 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase9:35789Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase9:39005Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1027353404_17 at /127.0.0.1:50692 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x700af9c3-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase9:42625 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3a0ec6e9-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1549009812-1850 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1771827885.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/cluster_88e0f84f-bfb4-0918-fd25-f5762e628808/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1714881690-1836 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp392500467-2099 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp473583657-1748 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:49301@0x44efe797-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1836073772_17 at /127.0.0.1:53424 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741900_1076] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1714881690-1837 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159520526 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: qtp569243963-1779 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5-prefix:jenkins-hbase9.apache.org,39005,1689159519576 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5-prefix:jenkins-hbase9.apache.org,35789,1689159519370 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1093849043_17 at /127.0.0.1:37698 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741899_1075] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp473583657-1742-acceptor-0@2f4ef1ba-ServerConnector@1b8ca572{HTTP/1.1, (http/1.1)}{0.0.0.0:41045} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-55b343c7-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5-prefix:jenkins-hbase9.apache.org,41263,1689159519156.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData-prefix:jenkins-hbase9.apache.org,42625,1689159518976 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741898_1074, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp473583657-1746 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase9:42625 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x6f290853-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp473583657-1745 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase9:41263-shortCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase9:41263 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1714881690-1835 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp569243963-1776 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41263 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741902_1078, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x700af9c3-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1781302255-1803-acceptor-0@6b0b3f75-ServerConnector@576ff101{HTTP/1.1, (http/1.1)}{0.0.0.0:41381} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x483e2599-SendThread(127.0.0.1:49301) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x060d7140-EventThread 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp473583657-1741 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1771827885.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41263 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1714881690-1839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x4318667d-SendThread(127.0.0.1:49301) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1093849043_17 at /127.0.0.1:53090 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741899_1075] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1234087232) connection to localhost/127.0.0.1:42757 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1549009812-1846 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1771827885.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42625 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1549009812-1844 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1771827885.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x700af9c3-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x1ca0ac65 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/356426229.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41263 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41263 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44255 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1836073772_17 at /127.0.0.1:37710 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741900_1076] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-17-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp569243963-1778 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:42757 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41263 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1093849043_17 at /127.0.0.1:37726 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741902_1078] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x1ca0ac65-SendThread(127.0.0.1:49301) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase9:41263Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741899_1075, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-17-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase9:39005 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44255 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp473583657-1744 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1549009812-1848 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase9:39005-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1781302255-1804 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=42625 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x44efe797 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/356426229.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x33f2d925-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/cluster_88e0f84f-bfb4-0918-fd25-f5762e628808/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741900_1076, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1781302255-1802 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1771827885.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741900_1076, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
qtp392500467-2094 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741902_1078, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x44efe797-SendThread(127.0.0.1:49301) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741901_1077, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1714881690-1838 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=42625 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159520515 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2081394496_17 at /127.0.0.1:53438 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741901_1077] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1234087232) connection to localhost/127.0.0.1:42757 from jenkins.hfs.11 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) 
org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1093849043_17 at /127.0.0.1:53448 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741902_1078] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:42757 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x33f2d925 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/356426229.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_810542760_17 at /127.0.0.1:53078 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741898_1074] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1946597163-172.31.2.10-1689159478370:blk_1073741901_1077, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1714881690-1833-acceptor-0@7d5acb38-ServerConnector@e64859e{HTTP/1.1, (http/1.1)}{0.0.0.0:37657} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x700af9c3-metaLookup-shared--pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x015b285a sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/356426229.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1093849043_17 at /127.0.0.1:53114 [Receiving block BP-1946597163-172.31.2.10-1689159478370:blk_1073741902_1078] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp392500467-2096 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: qtp1549009812-1849 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42625 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1714881690-1834 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49301@0x060d7140 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/356426229.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=869 (was 811) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=270 (was 312), ProcessCount=170 (was 170), AvailableMemoryMB=8022 (was 8219) 2023-07-12 10:58:49,753 WARN [Listener at localhost/44831] hbase.ResourceChecker(130): Thread=555 is superior to 500 2023-07-12 10:58:49,772 INFO [RS:3;jenkins-hbase9:44255] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C44255%2C1689159529453, suffix=, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,44255,1689159529453, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs, maxLogs=32 2023-07-12 10:58:49,775 INFO [Listener at localhost/44831] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testClearDeadServers Thread=555, OpenFileDescriptor=869, MaxFileDescriptor=60000, SystemLoadAverage=270, ProcessCount=170, AvailableMemoryMB=8021 2023-07-12 10:58:49,775 WARN [Listener at localhost/44831] hbase.ResourceChecker(130): Thread=555 is superior to 500 2023-07-12 10:58:49,775 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(132): testClearDeadServers 2023-07-12 10:58:49,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:49,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:49,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:49,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:49,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:49,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:49,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:49,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:49,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:49,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:49,790 DEBUG [RS-EventLoopGroup-17-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:49,790 DEBUG [RS-EventLoopGroup-17-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:49,790 DEBUG [RS-EventLoopGroup-17-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:49,792 INFO [RS:3;jenkins-hbase9:44255] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,44255,1689159529453/jenkins-hbase9.apache.org%2C44255%2C1689159529453.1689159529772 2023-07-12 10:58:49,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:49,792 DEBUG [RS:3;jenkins-hbase9:44255] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK], DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK], DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK]] 2023-07-12 10:58:49,794 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:49,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:49,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:49,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:49,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:49,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:49,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:49,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:49,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:42625] to rsgroup master 2023-07-12 10:58:49,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:42625 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:49,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] ipc.CallRunner(144): callId: 53 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.2.10:60770 deadline: 1689160729814, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:42625 is either offline or it does not exist. 2023-07-12 10:58:49,815 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:42625 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor63.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:42625 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 10:58:49,818 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:49,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:49,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:49,819 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:35789, jenkins-hbase9.apache.org:39005, jenkins-hbase9.apache.org:41263, jenkins-hbase9.apache.org:44255], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:49,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:49,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:49,820 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBasics(214): testClearDeadServers 2023-07-12 10:58:49,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:49,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:49,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup Group_testClearDeadServers_942975590 2023-07-12 10:58:49,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:49,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_942975590 2023-07-12 10:58:49,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:49,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:49,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:49,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:49,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for 
RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:49,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:35789, jenkins-hbase9.apache.org:39005, jenkins-hbase9.apache.org:41263] to rsgroup Group_testClearDeadServers_942975590 2023-07-12 10:58:49,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:49,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_942975590 2023-07-12 10:58:49,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:49,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:49,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminServer(238): Moving server region 0832c48321f808d3b4d6fb68605b1448, which do not belong to RSGroup Group_testClearDeadServers_942975590 2023-07-12 10:58:49,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] procedure2.ProcedureExecutor(1029): Stored pid=145, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, REOPEN/MOVE 2023-07-12 10:58:49,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminServer(238): Moving server region b71f0d9015a7b2292849acab5e81c0c6, which do not belong to RSGroup Group_testClearDeadServers_942975590 2023-07-12 10:58:49,842 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, REOPEN/MOVE 2023-07-12 10:58:49,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] procedure2.ProcedureExecutor(1029): Stored pid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:quota, region=b71f0d9015a7b2292849acab5e81c0c6, REOPEN/MOVE 2023-07-12 10:58:49,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminServer(238): Moving server region e5addb24bba6e8be9d4cddc12a45ff25, which do not belong to RSGroup Group_testClearDeadServers_942975590 2023-07-12 10:58:49,843 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:quota, region=b71f0d9015a7b2292849acab5e81c0c6, REOPEN/MOVE 2023-07-12 10:58:49,843 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=0832c48321f808d3b4d6fb68605b1448, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:49,843 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159529843"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159529843"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159529843"}]},"ts":"1689159529843"} 2023-07-12 10:58:49,845 INFO 
[PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=148, ppid=145, state=RUNNABLE; CloseRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,35789,1689159519370}] 2023-07-12 10:58:49,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] procedure2.ProcedureExecutor(1029): Stored pid=147, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, REOPEN/MOVE 2023-07-12 10:58:49,845 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=b71f0d9015a7b2292849acab5e81c0c6, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:49,845 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, REOPEN/MOVE 2023-07-12 10:58:49,845 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689159529845"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159529845"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159529845"}]},"ts":"1689159529845"} 2023-07-12 10:58:49,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testClearDeadServers_942975590 2023-07-12 10:58:49,847 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=e5addb24bba6e8be9d4cddc12a45ff25, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:49,847 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159529847"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159529847"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159529847"}]},"ts":"1689159529847"} 2023-07-12 10:58:49,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] procedure2.ProcedureExecutor(1029): Stored pid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-12 10:58:49,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminServer(286): Moving 4 region(s) to group default, current retry=0 2023-07-12 10:58:49,848 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-12 10:58:49,849 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,41263,1689159519156, state=CLOSING 2023-07-12 10:58:49,852 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 10:58:49,852 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path 
/hbase/meta-region-server: CHANGED 2023-07-12 10:58:49,852 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=149, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,41263,1689159519156}] 2023-07-12 10:58:49,854 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=146, state=RUNNABLE; CloseRegionProcedure b71f0d9015a7b2292849acab5e81c0c6, server=jenkins-hbase9.apache.org,39005,1689159519576}] 2023-07-12 10:58:49,854 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=147, state=RUNNABLE; CloseRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,41263,1689159519156}] 2023-07-12 10:58:49,855 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=151, ppid=146, state=RUNNABLE; CloseRegionProcedure b71f0d9015a7b2292849acab5e81c0c6, server=jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:49,858 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=152, ppid=147, state=RUNNABLE; CloseRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:49,999 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:50,001 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 0832c48321f808d3b4d6fb68605b1448, disabling compactions & flushes 2023-07-12 10:58:50,001 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:50,001 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:50,001 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. after waiting 0 ms 2023-07-12 10:58:50,001 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 
2023-07-12 10:58:50,001 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 0832c48321f808d3b4d6fb68605b1448 1/1 column families, dataSize=2.22 KB heapSize=3.71 KB 2023-07-12 10:58:50,010 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-12 10:58:50,010 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 10:58:50,010 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 10:58:50,010 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 10:58:50,010 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 10:58:50,010 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 10:58:50,011 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.82 KB heapSize=7 KB 2023-07-12 10:58:50,082 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.22 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/.tmp/m/73dee99a570c4214990ad5de3fad4284 2023-07-12 10:58:50,086 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.82 KB at sequenceid=188 (bloomFilter=false), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/info/ff084a9e73214911ba66a900a6b65693 2023-07-12 10:58:50,091 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 73dee99a570c4214990ad5de3fad4284 2023-07-12 10:58:50,092 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/.tmp/m/73dee99a570c4214990ad5de3fad4284 as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/73dee99a570c4214990ad5de3fad4284 2023-07-12 10:58:50,097 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/info/ff084a9e73214911ba66a900a6b65693 as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/ff084a9e73214911ba66a900a6b65693 2023-07-12 10:58:50,098 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 73dee99a570c4214990ad5de3fad4284 2023-07-12 10:58:50,098 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/73dee99a570c4214990ad5de3fad4284, entries=5, sequenceid=101, filesize=5.3 K 2023-07-12 10:58:50,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.22 KB/2272, heapSize ~3.70 KB/3784, currentSize=0 B/0 for 0832c48321f808d3b4d6fb68605b1448 in 98ms, sequenceid=101, compaction requested=true 2023-07-12 10:58:50,121 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/ff084a9e73214911ba66a900a6b65693, entries=33, sequenceid=188, filesize=8.6 K 2023-07-12 10:58:50,123 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.82 KB/3912, heapSize ~6.48 KB/6640, currentSize=0 B/0 for 1588230740 in 113ms, sequenceid=188, compaction requested=false 2023-07-12 10:58:50,137 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/b49a1b1f3cde4a5f98186fb585abc133, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/f03719a871834b2389c705f4609fdcac, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/b454ddcab3764fdc8af2fffba894338f] to archive 2023-07-12 10:58:50,140 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=90 2023-07-12 10:58:50,141 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:50,142 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(360): Archiving compacted files. 2023-07-12 10:58:50,142 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 
2023-07-12 10:58:50,142 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 0832c48321f808d3b4d6fb68605b1448: 2023-07-12 10:58:50,142 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 0832c48321f808d3b4d6fb68605b1448 move to jenkins-hbase9.apache.org,44255,1689159529453 record at close sequenceid=101 2023-07-12 10:58:50,144 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=148, ppid=145, state=RUNNABLE; CloseRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:50,144 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:50,145 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/b49a1b1f3cde4a5f98186fb585abc133 to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/hbase/meta/1588230740/info/b49a1b1f3cde4a5f98186fb585abc133 2023-07-12 10:58:50,147 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/f03719a871834b2389c705f4609fdcac to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/hbase/meta/1588230740/info/f03719a871834b2389c705f4609fdcac 2023-07-12 10:58:50,148 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/b454ddcab3764fdc8af2fffba894338f to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/hbase/meta/1588230740/info/b454ddcab3764fdc8af2fffba894338f 2023-07-12 10:58:50,163 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/a69ee6b8f1cc4724a5b721bd5c87f29a, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/3d9afdddc7e1488ba950023ac0c57891, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/d1ffc01c5dfe485096e4d50a2844f7e1] to archive 2023-07-12 10:58:50,164 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-07-12 10:58:50,166 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/a69ee6b8f1cc4724a5b721bd5c87f29a to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/hbase/meta/1588230740/table/a69ee6b8f1cc4724a5b721bd5c87f29a 2023-07-12 10:58:50,167 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/3d9afdddc7e1488ba950023ac0c57891 to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/hbase/meta/1588230740/table/3d9afdddc7e1488ba950023ac0c57891 2023-07-12 10:58:50,169 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/d1ffc01c5dfe485096e4d50a2844f7e1 to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/hbase/meta/1588230740/table/d1ffc01c5dfe485096e4d50a2844f7e1 2023-07-12 10:58:50,175 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/recovered.edits/191.seqid, newMaxSeqId=191, maxSeqId=174 2023-07-12 10:58:50,175 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:50,176 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 10:58:50,176 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 10:58:50,176 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase9.apache.org,44255,1689159529453 record at close sequenceid=188 2023-07-12 10:58:50,178 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-12 10:58:50,178 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-12 10:58:50,181 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=149 2023-07-12 10:58:50,181 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=149, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,41263,1689159519156 in 326 msec 2023-07-12 10:58:50,182 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=149, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,44255,1689159529453; forceNewPlan=false, retain=false 2023-07-12 10:58:50,333 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,44255,1689159529453, state=OPENING 2023-07-12 10:58:50,334 DEBUG [Listener at localhost/44831-EventThread] 
zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 10:58:50,334 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 10:58:50,334 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=149, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,44255,1689159529453}] 2023-07-12 10:58:50,488 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:50,488 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:50,489 INFO [RS-EventLoopGroup-17-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:41224, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:50,493 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 10:58:50,493 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:50,495 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C44255%2C1689159529453.meta, suffix=.meta, logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,44255,1689159529453, archiveDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs, maxLogs=32 2023-07-12 10:58:50,515 DEBUG [RS-EventLoopGroup-17-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK] 2023-07-12 10:58:50,515 DEBUG [RS-EventLoopGroup-17-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK] 2023-07-12 10:58:50,515 DEBUG [RS-EventLoopGroup-17-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK] 2023-07-12 10:58:50,517 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,44255,1689159529453/jenkins-hbase9.apache.org%2C44255%2C1689159529453.meta.1689159530496.meta 2023-07-12 10:58:50,517 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44321,DS-ed5dbd85-7310-4bee-b864-55ba5c2ef214,DISK], DatanodeInfoWithStorage[127.0.0.1:36995,DS-18996c26-134b-4ae1-9bfa-bd02893d59d3,DISK], DatanodeInfoWithStorage[127.0.0.1:40977,DS-0b38dffd-2c06-4426-af3d-52cb26a8ce73,DISK]] 2023-07-12 10:58:50,518 DEBUG 
[RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:50,518 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:58:50,518 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 10:58:50,518 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-12 10:58:50,518 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 10:58:50,518 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:50,518 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 10:58:50,518 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 10:58:50,519 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 10:58:50,520 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info 2023-07-12 10:58:50,520 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info 2023-07-12 10:58:50,521 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 10:58:50,527 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/9746370004b64e0092ed4491146b79dd 2023-07-12 10:58:50,532 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/ff084a9e73214911ba66a900a6b65693 2023-07-12 10:58:50,532 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:50,532 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 10:58:50,533 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:58:50,533 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:58:50,534 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 10:58:50,545 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0ce0a95344df49e981f2d4e98996f0d6 2023-07-12 10:58:50,545 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/rep_barrier/0ce0a95344df49e981f2d4e98996f0d6 2023-07-12 10:58:50,545 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:50,545 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 10:58:50,546 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table 2023-07-12 10:58:50,546 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table 2023-07-12 10:58:50,547 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 10:58:50,555 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/table/8d1790e7af7f47f693209cca99ed2577 2023-07-12 10:58:50,555 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:50,556 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740 2023-07-12 10:58:50,557 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740 2023-07-12 10:58:50,560 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 10:58:50,561 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 10:58:50,562 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=192; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9632633120, jitterRate=-0.10289113223552704}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 10:58:50,562 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 10:58:50,563 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=153, masterSystemTime=1689159530488 2023-07-12 10:58:50,567 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 10:58:50,568 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 10:58:50,568 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,44255,1689159529453, state=OPEN 2023-07-12 10:58:50,570 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 10:58:50,570 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 10:58:50,571 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=0832c48321f808d3b4d6fb68605b1448, regionState=CLOSED 2023-07-12 10:58:50,571 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159530571"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159530571"}]},"ts":"1689159530571"} 2023-07-12 10:58:50,571 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41263] ipc.CallRunner(144): callId: 65 service: ClientService methodName: Mutate size: 213 connection: 172.31.2.10:38356 deadline: 1689159590571, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=44255 startCode=1689159529453. As of locationSeqNum=188. 
2023-07-12 10:58:50,573 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=149 2023-07-12 10:58:50,573 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=149, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,44255,1689159529453 in 236 msec 2023-07-12 10:58:50,576 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=149, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 728 msec 2023-07-12 10:58:50,673 DEBUG [PEWorker-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:50,675 INFO [RS-EventLoopGroup-17-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:41234, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:50,678 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=145 2023-07-12 10:58:50,678 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=145, state=SUCCESS; CloseRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,35789,1689159519370 in 831 msec 2023-07-12 10:58:50,679 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=145, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,44255,1689159529453; forceNewPlan=false, retain=false 2023-07-12 10:58:50,722 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:50,730 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing e5addb24bba6e8be9d4cddc12a45ff25, disabling compactions & flushes 2023-07-12 10:58:50,730 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:50,730 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:50,730 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. after waiting 0 ms 2023-07-12 10:58:50,730 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:50,731 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:50,747 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing b71f0d9015a7b2292849acab5e81c0c6, disabling compactions & flushes 2023-07-12 10:58:50,748 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:50,748 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 
2023-07-12 10:58:50,748 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. after waiting 0 ms 2023-07-12 10:58:50,748 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:50,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 10:58:50,793 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/recovered.edits/33.seqid, newMaxSeqId=33, maxSeqId=30 2023-07-12 10:58:50,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:50,798 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for e5addb24bba6e8be9d4cddc12a45ff25: 2023-07-12 10:58:50,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding e5addb24bba6e8be9d4cddc12a45ff25 move to jenkins-hbase9.apache.org,44255,1689159529453 record at close sequenceid=31 2023-07-12 10:58:50,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:50,800 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for b71f0d9015a7b2292849acab5e81c0c6: 2023-07-12 10:58:50,800 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding b71f0d9015a7b2292849acab5e81c0c6 move to jenkins-hbase9.apache.org,44255,1689159529453 record at close sequenceid=5 2023-07-12 10:58:50,803 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:50,804 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=e5addb24bba6e8be9d4cddc12a45ff25, regionState=CLOSED 2023-07-12 10:58:50,804 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159530804"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159530804"}]},"ts":"1689159530804"} 2023-07-12 10:58:50,806 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:50,806 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=b71f0d9015a7b2292849acab5e81c0c6, regionState=CLOSED 2023-07-12 10:58:50,806 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689159530806"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159530806"}]},"ts":"1689159530806"} 2023-07-12 10:58:50,809 INFO 
[PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=147 2023-07-12 10:58:50,809 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=147, state=SUCCESS; CloseRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,41263,1689159519156 in 952 msec 2023-07-12 10:58:50,810 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=147, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,44255,1689159529453; forceNewPlan=false, retain=false 2023-07-12 10:58:50,810 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=0832c48321f808d3b4d6fb68605b1448, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:50,810 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159530810"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159530810"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159530810"}]},"ts":"1689159530810"} 2023-07-12 10:58:50,815 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=146 2023-07-12 10:58:50,815 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=154, ppid=145, state=RUNNABLE; OpenRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,44255,1689159529453}] 2023-07-12 10:58:50,815 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=146, state=SUCCESS; CloseRegionProcedure b71f0d9015a7b2292849acab5e81c0c6, server=jenkins-hbase9.apache.org,39005,1689159519576 in 954 msec 2023-07-12 10:58:50,816 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=e5addb24bba6e8be9d4cddc12a45ff25, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:50,816 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159530815"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159530815"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159530815"}]},"ts":"1689159530815"} 2023-07-12 10:58:50,818 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=146, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=b71f0d9015a7b2292849acab5e81c0c6, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,44255,1689159529453; forceNewPlan=false, retain=false 2023-07-12 10:58:50,819 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=155, ppid=147, state=RUNNABLE; OpenRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,44255,1689159529453}] 2023-07-12 10:58:50,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] procedure.ProcedureSyncWait(216): waitFor pid=145 2023-07-12 10:58:50,969 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=146 updating hbase:meta 
row=b71f0d9015a7b2292849acab5e81c0c6, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:50,969 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689159530968"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159530968"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159530968"}]},"ts":"1689159530968"} 2023-07-12 10:58:50,972 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=156, ppid=146, state=RUNNABLE; OpenRegionProcedure b71f0d9015a7b2292849acab5e81c0c6, server=jenkins-hbase9.apache.org,44255,1689159529453}] 2023-07-12 10:58:50,981 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:50,981 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0832c48321f808d3b4d6fb68605b1448, NAME => 'hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:50,981 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:58:50,981 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. service=MultiRowMutationService 2023-07-12 10:58:50,981 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-12 10:58:50,982 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:50,982 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:50,982 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:50,982 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:50,994 INFO [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:50,997 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m 2023-07-12 10:58:50,997 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m 2023-07-12 10:58:50,998 INFO [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0832c48321f808d3b4d6fb68605b1448 columnFamilyName m 2023-07-12 10:58:51,010 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 73dee99a570c4214990ad5de3fad4284 2023-07-12 10:58:51,010 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/73dee99a570c4214990ad5de3fad4284 2023-07-12 10:58:51,024 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/bffd8719b2904374a71e69e548411438 2023-07-12 10:58:51,031 DEBUG [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(539): loaded 
hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/ff557c1700754976a0534ed7a4fce455 2023-07-12 10:58:51,032 INFO [StoreOpener-0832c48321f808d3b4d6fb68605b1448-1] regionserver.HStore(310): Store=0832c48321f808d3b4d6fb68605b1448/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:51,032 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:51,034 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:51,037 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:51,039 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 0832c48321f808d3b4d6fb68605b1448; next sequenceid=105; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@50506bf, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:51,039 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 0832c48321f808d3b4d6fb68605b1448: 2023-07-12 10:58:51,040 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448., pid=154, masterSystemTime=1689159530969 2023-07-12 10:58:51,040 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-12 10:58:51,043 DEBUG [RS:3;jenkins-hbase9:44255-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-12 10:58:51,044 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:51,044 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:51,045 DEBUG [RS:3;jenkins-hbase9:44255-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15762 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-12 10:58:51,045 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 
2023-07-12 10:58:51,045 DEBUG [RS:3;jenkins-hbase9:44255-shortCompactions-0] regionserver.HStore(1912): 0832c48321f808d3b4d6fb68605b1448/m is initiating minor compaction (all files) 2023-07-12 10:58:51,045 INFO [RS:3;jenkins-hbase9:44255-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 0832c48321f808d3b4d6fb68605b1448/m in hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:51,045 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e5addb24bba6e8be9d4cddc12a45ff25, NAME => 'hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:51,045 INFO [RS:3;jenkins-hbase9:44255-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/bffd8719b2904374a71e69e548411438, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/ff557c1700754976a0534ed7a4fce455, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/73dee99a570c4214990ad5de3fad4284] into tmpdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/.tmp, totalSize=15.4 K 2023-07-12 10:58:51,045 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:51,045 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:51,045 DEBUG [RS:3;jenkins-hbase9:44255-shortCompactions-0] compactions.Compactor(207): Compacting bffd8719b2904374a71e69e548411438, keycount=2, bloomtype=ROW, size=5.1 K, encoding=NONE, compression=NONE, seqNum=79, earliestPutTs=1689159503562 2023-07-12 10:58:51,045 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:51,045 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:51,046 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=0832c48321f808d3b4d6fb68605b1448, regionState=OPEN, openSeqNum=105, regionLocation=jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:51,046 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159531045"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159531045"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159531045"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159531045"}]},"ts":"1689159531045"} 2023-07-12 10:58:51,046 DEBUG [RS:3;jenkins-hbase9:44255-shortCompactions-0] compactions.Compactor(207): Compacting ff557c1700754976a0534ed7a4fce455, 
keycount=2, bloomtype=ROW, size=5.0 K, encoding=NONE, compression=NONE, seqNum=87, earliestPutTs=1689159516227 2023-07-12 10:58:51,046 DEBUG [RS:3;jenkins-hbase9:44255-shortCompactions-0] compactions.Compactor(207): Compacting 73dee99a570c4214990ad5de3fad4284, keycount=5, bloomtype=ROW, size=5.3 K, encoding=NONE, compression=NONE, seqNum=101, earliestPutTs=1689159529838 2023-07-12 10:58:51,051 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=154, resume processing ppid=145 2023-07-12 10:58:51,051 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=154, ppid=145, state=SUCCESS; OpenRegionProcedure 0832c48321f808d3b4d6fb68605b1448, server=jenkins-hbase9.apache.org,44255,1689159529453 in 233 msec 2023-07-12 10:58:51,052 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=145, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=0832c48321f808d3b4d6fb68605b1448, REOPEN/MOVE in 1.2090 sec 2023-07-12 10:58:51,054 INFO [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:51,055 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info 2023-07-12 10:58:51,055 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info 2023-07-12 10:58:51,056 INFO [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e5addb24bba6e8be9d4cddc12a45ff25 columnFamilyName info 2023-07-12 10:58:51,074 INFO [RS:3;jenkins-hbase9:44255-shortCompactions-0] throttle.PressureAwareThroughputController(145): 0832c48321f808d3b4d6fb68605b1448#m#compaction#22 average throughput is 0.35 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-07-12 10:58:51,079 DEBUG [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(539): loaded hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/info/d15ac85af42147dfab1746d5f141cc5a 2023-07-12 10:58:51,079 INFO [StoreOpener-e5addb24bba6e8be9d4cddc12a45ff25-1] regionserver.HStore(310): Store=e5addb24bba6e8be9d4cddc12a45ff25/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:51,080 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:51,081 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:51,085 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:51,086 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened e5addb24bba6e8be9d4cddc12a45ff25; next sequenceid=34; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10409091840, jitterRate=-0.030577778816223145}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:51,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for e5addb24bba6e8be9d4cddc12a45ff25: 2023-07-12 10:58:51,087 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25., pid=155, masterSystemTime=1689159530969 2023-07-12 10:58:51,089 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:51,089 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 
2023-07-12 10:58:51,092 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=e5addb24bba6e8be9d4cddc12a45ff25, regionState=OPEN, openSeqNum=34, regionLocation=jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:51,092 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159531092"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159531092"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159531092"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159531092"}]},"ts":"1689159531092"} 2023-07-12 10:58:51,096 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=155, resume processing ppid=147 2023-07-12 10:58:51,096 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=155, ppid=147, state=SUCCESS; OpenRegionProcedure e5addb24bba6e8be9d4cddc12a45ff25, server=jenkins-hbase9.apache.org,44255,1689159529453 in 275 msec 2023-07-12 10:58:51,098 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=147, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e5addb24bba6e8be9d4cddc12a45ff25, REOPEN/MOVE in 1.2530 sec 2023-07-12 10:58:51,102 DEBUG [RS:3;jenkins-hbase9:44255-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/.tmp/m/8f2d65e9cc234130972182129ab4ecc3 as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/8f2d65e9cc234130972182129ab4ecc3 2023-07-12 10:58:51,108 INFO [RS:3;jenkins-hbase9:44255-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 0832c48321f808d3b4d6fb68605b1448/m of 0832c48321f808d3b4d6fb68605b1448 into 8f2d65e9cc234130972182129ab4ecc3(size=5.3 K), total size for store is 5.3 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-07-12 10:58:51,108 DEBUG [RS:3;jenkins-hbase9:44255-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 0832c48321f808d3b4d6fb68605b1448: 2023-07-12 10:58:51,109 INFO [RS:3;jenkins-hbase9:44255-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448., storeName=0832c48321f808d3b4d6fb68605b1448/m, priority=13, startTime=1689159531040; duration=0sec 2023-07-12 10:58:51,109 DEBUG [RS:3;jenkins-hbase9:44255-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-12 10:58:51,129 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 
2023-07-12 10:58:51,129 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b71f0d9015a7b2292849acab5e81c0c6, NAME => 'hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:51,129 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:51,129 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:51,129 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:51,129 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:51,131 INFO [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:51,131 DEBUG [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6/q 2023-07-12 10:58:51,132 DEBUG [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6/q 2023-07-12 10:58:51,132 INFO [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b71f0d9015a7b2292849acab5e81c0c6 columnFamilyName q 2023-07-12 10:58:51,132 INFO [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] regionserver.HStore(310): Store=b71f0d9015a7b2292849acab5e81c0c6/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:51,133 INFO [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:51,133 DEBUG 
[StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6/u 2023-07-12 10:58:51,133 DEBUG [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6/u 2023-07-12 10:58:51,134 INFO [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b71f0d9015a7b2292849acab5e81c0c6 columnFamilyName u 2023-07-12 10:58:51,134 INFO [StoreOpener-b71f0d9015a7b2292849acab5e81c0c6-1] regionserver.HStore(310): Store=b71f0d9015a7b2292849acab5e81c0c6/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:51,135 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:51,136 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:51,138 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-12 10:58:51,139 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:51,140 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened b71f0d9015a7b2292849acab5e81c0c6; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11183755200, jitterRate=0.041568368673324585}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-12 10:58:51,140 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for b71f0d9015a7b2292849acab5e81c0c6: 2023-07-12 10:58:51,141 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6., pid=156, masterSystemTime=1689159531125 2023-07-12 10:58:51,143 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:51,143 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:51,143 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=b71f0d9015a7b2292849acab5e81c0c6, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:51,144 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689159531143"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159531143"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159531143"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159531143"}]},"ts":"1689159531143"} 2023-07-12 10:58:51,148 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=156, resume processing ppid=146 2023-07-12 10:58:51,148 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=156, ppid=146, state=SUCCESS; OpenRegionProcedure b71f0d9015a7b2292849acab5e81c0c6, server=jenkins-hbase9.apache.org,44255,1689159529453 in 174 msec 2023-07-12 10:58:51,152 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=146, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=b71f0d9015a7b2292849acab5e81c0c6, REOPEN/MOVE in 1.3050 sec 2023-07-12 10:58:51,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,35789,1689159519370, jenkins-hbase9.apache.org,39005,1689159519576, jenkins-hbase9.apache.org,41263,1689159519156] are moved back to default 2023-07-12 10:58:51,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testClearDeadServers_942975590 2023-07-12 10:58:51,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:51,849 DEBUG 
[RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35789] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Scan size: 136 connection: 172.31.2.10:47956 deadline: 1689159591849, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=44255 startCode=1689159529453. As of locationSeqNum=101. 2023-07-12 10:58:51,953 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41263] ipc.CallRunner(144): callId: 5 service: ClientService methodName: Get size: 88 connection: 172.31.2.10:56884 deadline: 1689159591953, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=44255 startCode=1689159529453. As of locationSeqNum=188. 2023-07-12 10:58:52,056 DEBUG [hconnection-0x3a0ec6e9-shared-pool-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:52,057 INFO [RS-EventLoopGroup-17-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:47316, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:52,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:52,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:52,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=Group_testClearDeadServers_942975590 2023-07-12 10:58:52,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:52,068 DEBUG [Listener at localhost/44831] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:52,069 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:42738, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:52,069 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35789] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,35789,1689159519370' ***** 2023-07-12 10:58:52,069 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35789] regionserver.HRegionServer(2311): STOPPED: Called by admin client hconnection-0x67b7b866 2023-07-12 10:58:52,069 INFO [RS:1;jenkins-hbase9:35789] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:52,072 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:52,072 INFO [RS:1;jenkins-hbase9:35789] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5cbf1f2f{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:52,073 INFO [RS:1;jenkins-hbase9:35789] server.AbstractConnector(383): Stopped ServerConnector@576ff101{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 
10:58:52,073 INFO [RS:1;jenkins-hbase9:35789] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:52,074 INFO [RS:1;jenkins-hbase9:35789] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@38d6140c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:52,074 INFO [RS:1;jenkins-hbase9:35789] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7ebc5a8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:52,075 INFO [RS:1;jenkins-hbase9:35789] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:52,075 INFO [RS:1;jenkins-hbase9:35789] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:52,075 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:52,075 INFO [RS:1;jenkins-hbase9:35789] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:52,075 INFO [RS:1;jenkins-hbase9:35789] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:52,075 DEBUG [RS:1;jenkins-hbase9:35789] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1ca0ac65 to 127.0.0.1:49301 2023-07-12 10:58:52,075 DEBUG [RS:1;jenkins-hbase9:35789] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:52,075 INFO [RS:1;jenkins-hbase9:35789] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,35789,1689159519370; all regions closed. 2023-07-12 10:58:52,084 DEBUG [RS:1;jenkins-hbase9:35789] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs 2023-07-12 10:58:52,084 INFO [RS:1;jenkins-hbase9:35789] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C35789%2C1689159519370:(num 1689159520774) 2023-07-12 10:58:52,084 DEBUG [RS:1;jenkins-hbase9:35789] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:52,084 INFO [RS:1;jenkins-hbase9:35789] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:52,084 INFO [RS:1;jenkins-hbase9:35789] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:52,084 INFO [RS:1;jenkins-hbase9:35789] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:52,084 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:52,084 INFO [RS:1;jenkins-hbase9:35789] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:52,085 INFO [RS:1;jenkins-hbase9:35789] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 10:58:52,085 INFO [RS:1;jenkins-hbase9:35789] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:35789 2023-07-12 10:58:52,087 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:52,087 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:52,087 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:52,087 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:52,087 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:52,087 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:52,088 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:52,088 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:35789-0x1015920fb08001e, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,35789,1689159519370 2023-07-12 10:58:52,088 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:35789-0x1015920fb08001e, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:52,089 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,35789,1689159519370] 2023-07-12 10:58:52,089 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:52,089 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:52,089 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,35789,1689159519370; numProcessing=1 2023-07-12 10:58:52,089 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(162): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:52,089 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:52,089 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:52,089 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:52,093 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,35789,1689159519370 already deleted, retry=false 2023-07-12 10:58:52,093 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:52,093 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:52,093 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase9.apache.org,35789,1689159519370 znode expired, triggering replicatorRemoved event 2023-07-12 10:58:52,093 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase9.apache.org,35789,1689159519370 znode expired, triggering replicatorRemoved event 2023-07-12 10:58:52,093 INFO [RegionServerTracker-0] master.ServerManager(568): Processing expiration of jenkins-hbase9.apache.org,35789,1689159519370 on jenkins-hbase9.apache.org,42625,1689159518976 2023-07-12 10:58:52,093 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:52,093 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase9.apache.org,35789,1689159519370 znode expired, triggering replicatorRemoved event 2023-07-12 10:58:52,094 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(1029): Stored pid=157, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase9.apache.org,35789,1689159519370, splitWal=true, meta=false 2023-07-12 10:58:52,094 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:52,095 INFO [RegionServerTracker-0] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=157 for jenkins-hbase9.apache.org,35789,1689159519370 (carryingMeta=false) 
jenkins-hbase9.apache.org,35789,1689159519370/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@2a15ed78[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-12 10:58:52,096 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:58:52,095 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:52,096 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:52,096 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:52,097 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:52,097 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:52,097 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:52,097 WARN [RS-EventLoopGroup-16-2] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase9.apache.org/172.31.2.10:35789 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:35789 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 10:58:52,097 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:52,099 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:52,099 DEBUG [RS-EventLoopGroup-16-2] ipc.FailedServers(52): Added failed server with address jenkins-hbase9.apache.org/172.31.2.10:35789 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase9.apache.org/172.31.2.10:35789 2023-07-12 10:58:52,100 INFO [PEWorker-3] procedure.ServerCrashProcedure(161): Start pid=157, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,35789,1689159519370, splitWal=true, meta=false 2023-07-12 10:58:52,101 INFO [PEWorker-3] procedure.ServerCrashProcedure(199): jenkins-hbase9.apache.org,35789,1689159519370 had 0 regions 2023-07-12 10:58:52,103 INFO [PEWorker-3] procedure.ServerCrashProcedure(300): Splitting WALs pid=157, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase9.apache.org,35789,1689159519370, splitWal=true, meta=false, isMeta: false 2023-07-12 10:58:52,104 DEBUG [PEWorker-3] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,35789,1689159519370-splitting 2023-07-12 10:58:52,104 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,35789,1689159519370-splitting dir is empty, no logs to split. 
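SplitLogManager reaches the "dir is empty, no logs to split" conclusion above by listing the crashed server's renamed WALs directory on HDFS. A minimal sketch of the same check with the plain Hadoop FileSystem API; the path is the one printed in the log and the helper method wrapper is illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    static boolean nothingToSplit(Configuration conf) throws java.io.IOException {
      // WAL dir of the crashed server after MasterWalManager renamed it to "<server>-splitting"
      Path splitting = new Path("hdfs://localhost:42757/user/jenkins/test-data/"
          + "1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/"
          + "jenkins-hbase9.apache.org,35789,1689159519370-splitting");
      FileSystem fs = splitting.getFileSystem(conf);
      // "WAL count=0" in the log corresponds to: directory already gone, or present but empty
      return !fs.exists(splitting) || fs.listStatus(splitting).length == 0;
    }
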
2023-07-12 10:58:52,105 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase9.apache.org,35789,1689159519370 WAL count=0, meta=false 2023-07-12 10:58:52,106 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,35789,1689159519370-splitting dir is empty, no logs to split. 2023-07-12 10:58:52,106 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase9.apache.org,35789,1689159519370 WAL count=0, meta=false 2023-07-12 10:58:52,106 DEBUG [PEWorker-3] procedure.ServerCrashProcedure(290): Check if jenkins-hbase9.apache.org,35789,1689159519370 WAL splitting is done? wals=0, meta=false 2023-07-12 10:58:52,108 INFO [PEWorker-3] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase9.apache.org,35789,1689159519370 failed, ignore...File hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,35789,1689159519370-splitting does not exist. 2023-07-12 10:58:52,109 INFO [PEWorker-3] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase9.apache.org,35789,1689159519370 after splitting done 2023-07-12 10:58:52,109 DEBUG [PEWorker-3] master.DeadServer(114): Removed jenkins-hbase9.apache.org,35789,1689159519370 from processing; numProcessing=0 2023-07-12 10:58:52,110 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=157, state=SUCCESS; ServerCrashProcedure jenkins-hbase9.apache.org,35789,1689159519370, splitWal=true, meta=false in 16 msec 2023-07-12 10:58:52,160 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:52,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(2362): Client=jenkins//172.31.2.10 clear dead region servers. 
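The "clear dead region servers" request at the end of the previous block is the Admin call the test issues once the ServerCrashProcedure has finished. A minimal sketch of that call against a running cluster; listDeadServers() and clearDeadServers() are the 2.x Admin methods this master RPC corresponds to, as far as the log shows:

    import java.util.List;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ClearDeadServersSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          List<ServerName> dead = admin.listDeadServers();      // servers the master still lists as dead
          // Ask the master to forget them; the returned list holds any it refused to clear.
          List<ServerName> notCleared = admin.clearDeadServers(dead);
          System.out.println("not cleared: " + notCleared);
        }
      }
    }
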
2023-07-12 10:58:52,210 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:52,210 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_942975590 2023-07-12 10:58:52,210 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:52,210 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:52,212 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 10:58:52,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:52,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_942975590 2023-07-12 10:58:52,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:52,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:52,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminServer(609): Remove decommissioned servers [jenkins-hbase9.apache.org:35789] from RSGroup done 2023-07-12 10:58:52,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=Group_testClearDeadServers_942975590 2023-07-12 10:58:52,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:52,222 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41263] ipc.CallRunner(144): callId: 84 service: ClientService methodName: Scan size: 146 connection: 172.31.2.10:38356 deadline: 1689159592222, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=44255 startCode=1689159529453. As of locationSeqNum=31. 2023-07-12 10:58:52,279 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:35789-0x1015920fb08001e, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:52,279 INFO [RS:1;jenkins-hbase9:35789] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,35789,1689159519370; zookeeper connection closed. 
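The GetRSGroupInfo request in the entries above is issued through the hbase-rsgroup client wrapper. A minimal sketch with RSGroupAdminClient, the same class that shows up in the stack trace further down; the group name is the one this test created, and the connection setup is assumed:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      RSGroupInfo group = rsGroupAdmin.getRSGroupInfo("Group_testClearDeadServers_942975590");
      if (group != null) {
        // After the dead 35789 server is removed from the group, only live members remain here.
        System.out.println(group.getName() + " servers=" + group.getServers());
      }
    }
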
2023-07-12 10:58:52,279 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:35789-0x1015920fb08001e, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:52,279 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@717f4710] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@717f4710 2023-07-12 10:58:52,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:52,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:52,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:52,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 10:58:52,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:52,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:39005, jenkins-hbase9.apache.org:41263] to rsgroup default 2023-07-12 10:58:52,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:52,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_942975590 2023-07-12 10:58:52,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:52,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:52,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testClearDeadServers_942975590, current retry=0 2023-07-12 10:58:52,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,39005,1689159519576, jenkins-hbase9.apache.org,41263,1689159519156] are moved back to Group_testClearDeadServers_942975590 2023-07-12 10:58:52,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testClearDeadServers_942975590 => default 2023-07-12 10:58:52,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:52,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup 
Group_testClearDeadServers_942975590 2023-07-12 10:58:52,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:52,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:52,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 10:58:52,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:52,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:52,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 10:58:52,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:52,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:52,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:52,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:52,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:52,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:52,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:52,351 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-12 10:58:52,364 INFO [Listener at localhost/44831] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:52,364 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:52,364 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:52,365 INFO [Listener at localhost/44831] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:52,365 INFO [Listener at localhost/44831] 
ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:52,365 INFO [Listener at localhost/44831] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:52,365 INFO [Listener at localhost/44831] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:52,368 INFO [Listener at localhost/44831] ipc.NettyRpcServer(120): Bind to /172.31.2.10:35777 2023-07-12 10:58:52,369 INFO [Listener at localhost/44831] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:52,371 DEBUG [Listener at localhost/44831] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:52,371 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:52,372 INFO [Listener at localhost/44831] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:52,373 INFO [Listener at localhost/44831] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35777 connecting to ZooKeeper ensemble=127.0.0.1:49301 2023-07-12 10:58:52,376 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:357770x0, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:52,377 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(162): regionserver:357770x0, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 10:58:52,378 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35777-0x1015920fb08002a connected 2023-07-12 10:58:52,378 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(162): regionserver:35777-0x1015920fb08002a, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-12 10:58:52,379 DEBUG [Listener at localhost/44831] zookeeper.ZKUtil(164): regionserver:35777-0x1015920fb08002a, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:52,379 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35777 2023-07-12 10:58:52,379 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35777 2023-07-12 10:58:52,380 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35777 2023-07-12 10:58:52,381 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35777 2023-07-12 10:58:52,381 DEBUG [Listener at localhost/44831] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35777 2023-07-12 10:58:52,383 INFO [Listener at 
localhost/44831] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:52,383 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:52,383 INFO [Listener at localhost/44831] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:52,383 INFO [Listener at localhost/44831] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:52,383 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:52,384 INFO [Listener at localhost/44831] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:52,384 INFO [Listener at localhost/44831] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 10:58:52,384 INFO [Listener at localhost/44831] http.HttpServer(1146): Jetty bound to port 34937 2023-07-12 10:58:52,384 INFO [Listener at localhost/44831] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:52,386 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:52,386 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@33956253{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:52,387 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:52,387 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@691c90cf{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:52,502 INFO [Listener at localhost/44831] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:52,504 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:52,504 INFO [Listener at localhost/44831] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:52,505 INFO [Listener at localhost/44831] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 10:58:52,505 INFO [Listener at localhost/44831] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:52,506 INFO [Listener at localhost/44831] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@3f156638{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/java.io.tmpdir/jetty-0_0_0_0-34937-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2297502316091014545/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:52,508 INFO [Listener at localhost/44831] server.AbstractConnector(333): Started ServerConnector@4b16a276{HTTP/1.1, (http/1.1)}{0.0.0.0:34937} 2023-07-12 10:58:52,508 INFO [Listener at localhost/44831] server.Server(415): Started @56201ms 2023-07-12 10:58:52,512 INFO [RS:4;jenkins-hbase9:35777] regionserver.HRegionServer(951): ClusterId : 2ee0ec36-84f9-4576-888d-f37f0b52beaa 2023-07-12 10:58:52,512 DEBUG [RS:4;jenkins-hbase9:35777] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:52,515 DEBUG [RS:4;jenkins-hbase9:35777] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:52,515 DEBUG [RS:4;jenkins-hbase9:35777] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:52,517 DEBUG [RS:4;jenkins-hbase9:35777] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:52,517 DEBUG [RS:4;jenkins-hbase9:35777] zookeeper.ReadOnlyZKClient(139): Connect 0x3fbc4c03 to 127.0.0.1:49301 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:52,521 DEBUG [RS:4;jenkins-hbase9:35777] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@317bb584, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:52,521 DEBUG [RS:4;jenkins-hbase9:35777] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@35442773, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:52,530 DEBUG [RS:4;jenkins-hbase9:35777] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:4;jenkins-hbase9:35777 2023-07-12 10:58:52,530 INFO [RS:4;jenkins-hbase9:35777] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:52,530 INFO [RS:4;jenkins-hbase9:35777] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:52,530 DEBUG [RS:4;jenkins-hbase9:35777] regionserver.HRegionServer(1022): About to register with Master. 
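The RS:4 startup above is the "Restoring servers: 1" step of the teardown: the base test starts a fresh region server so the cluster is back at its expected size before the next test. A minimal sketch of that step, again assuming the suite's HBaseTestingUtility as TEST_UTIL; the wait helper used is the one JVMClusterUtil exposes:

    import org.apache.hadoop.hbase.MiniHBaseCluster;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    // Start one extra region server; it then runs reportForDuty, registers under /hbase/rs
    // and is picked up into the default rsgroup, as the following entries show.
    MiniHBaseCluster cluster = TEST_UTIL.getMiniHBaseCluster(); // TEST_UTIL: assumed HBaseTestingUtility field
    JVMClusterUtil.RegionServerThread rst = cluster.startRegionServer();
    rst.waitForServerOnline(); // block until the new region server has checked in with the master
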
2023-07-12 10:58:52,530 INFO [RS:4;jenkins-hbase9:35777] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,42625,1689159518976 with isa=jenkins-hbase9.apache.org/172.31.2.10:35777, startcode=1689159532363 2023-07-12 10:58:52,530 DEBUG [RS:4;jenkins-hbase9:35777] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:52,532 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:37063, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.12 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:52,532 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42625] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,35777,1689159532363 2023-07-12 10:58:52,532 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:58:52,532 DEBUG [RS:4;jenkins-hbase9:35777] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5 2023-07-12 10:58:52,532 DEBUG [RS:4;jenkins-hbase9:35777] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42757 2023-07-12 10:58:52,533 DEBUG [RS:4;jenkins-hbase9:35777] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41045 2023-07-12 10:58:52,534 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:52,534 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:52,534 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:52,534 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:52,534 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:52,534 DEBUG [RS:4;jenkins-hbase9:35777] zookeeper.ZKUtil(162): regionserver:35777-0x1015920fb08002a, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35777,1689159532363 2023-07-12 10:58:52,534 WARN [RS:4;jenkins-hbase9:35777] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
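The bursts of NodeChildrenChanged and NodeDeleted events around these entries are ZooKeeper telling every peer that the membership under /hbase/rs changed. A stand-alone sketch of the same pattern with the plain ZooKeeper client, using the quorum address from the log; this shows only the underlying mechanism, not HBase's ZKWatcher itself:

    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class RsMembershipWatchSketch {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:49301", 90_000, event -> { });
        Watcher membershipWatcher = new Watcher() {
          @Override
          public void process(WatchedEvent event) {
            // Fired once per change (NodeChildrenChanged on /hbase/rs, NodeDeleted on a member);
            // a real tracker re-reads the children and re-registers the watch here.
            System.out.println("event=" + event.getType() + " path=" + event.getPath());
          }
        };
        List<String> liveServers = zk.getChildren("/hbase/rs", membershipWatcher);
        System.out.println("current members: " + liveServers);
      }
    }
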
2023-07-12 10:58:52,535 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,35777,1689159532363] 2023-07-12 10:58:52,535 INFO [RS:4;jenkins-hbase9:35777] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:52,535 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:52,535 DEBUG [RS:4;jenkins-hbase9:35777] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,35777,1689159532363 2023-07-12 10:58:52,535 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 10:58:52,535 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:52,535 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:52,535 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:52,538 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:52,538 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,42625,1689159518976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-12 10:58:52,538 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:52,538 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35777,1689159532363 2023-07-12 10:58:52,539 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35777,1689159532363 2023-07-12 10:58:52,539 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35777,1689159532363 2023-07-12 10:58:52,539 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:52,539 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:52,539 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:52,539 DEBUG [RS:4;jenkins-hbase9:35777] zookeeper.ZKUtil(162): regionserver:35777-0x1015920fb08002a, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:52,539 DEBUG [RS:4;jenkins-hbase9:35777] zookeeper.ZKUtil(162): regionserver:35777-0x1015920fb08002a, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:52,540 DEBUG [RS:4;jenkins-hbase9:35777] zookeeper.ZKUtil(162): regionserver:35777-0x1015920fb08002a, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35777,1689159532363 2023-07-12 10:58:52,540 DEBUG [RS:4;jenkins-hbase9:35777] zookeeper.ZKUtil(162): regionserver:35777-0x1015920fb08002a, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:52,541 DEBUG [RS:4;jenkins-hbase9:35777] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:52,541 INFO [RS:4;jenkins-hbase9:35777] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:52,548 INFO [RS:4;jenkins-hbase9:35777] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:52,548 INFO [RS:4;jenkins-hbase9:35777] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:52,548 INFO [RS:4;jenkins-hbase9:35777] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:52,548 INFO [RS:4;jenkins-hbase9:35777] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:52,550 INFO [RS:4;jenkins-hbase9:35777] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:52,551 DEBUG [RS:4;jenkins-hbase9:35777] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:52,551 DEBUG [RS:4;jenkins-hbase9:35777] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:52,551 DEBUG [RS:4;jenkins-hbase9:35777] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:52,551 DEBUG [RS:4;jenkins-hbase9:35777] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:52,551 DEBUG [RS:4;jenkins-hbase9:35777] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:52,551 DEBUG [RS:4;jenkins-hbase9:35777] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:52,551 DEBUG [RS:4;jenkins-hbase9:35777] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:52,551 DEBUG [RS:4;jenkins-hbase9:35777] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:52,551 DEBUG [RS:4;jenkins-hbase9:35777] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:52,551 DEBUG [RS:4;jenkins-hbase9:35777] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:52,552 INFO [RS:4;jenkins-hbase9:35777] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:52,552 INFO [RS:4;jenkins-hbase9:35777] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:52,553 INFO [RS:4;jenkins-hbase9:35777] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:52,569 INFO [RS:4;jenkins-hbase9:35777] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:52,569 INFO [RS:4;jenkins-hbase9:35777] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,35777,1689159532363-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:52,583 INFO [RS:4;jenkins-hbase9:35777] regionserver.Replication(203): jenkins-hbase9.apache.org,35777,1689159532363 started 2023-07-12 10:58:52,583 INFO [RS:4;jenkins-hbase9:35777] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,35777,1689159532363, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:35777, sessionid=0x1015920fb08002a 2023-07-12 10:58:52,583 DEBUG [RS:4;jenkins-hbase9:35777] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:52,583 DEBUG [RS:4;jenkins-hbase9:35777] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,35777,1689159532363 2023-07-12 10:58:52,583 DEBUG [RS:4;jenkins-hbase9:35777] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,35777,1689159532363' 2023-07-12 10:58:52,583 DEBUG [RS:4;jenkins-hbase9:35777] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:52,584 DEBUG [RS:4;jenkins-hbase9:35777] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:52,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:52,584 DEBUG [RS:4;jenkins-hbase9:35777] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:52,584 DEBUG [RS:4;jenkins-hbase9:35777] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:52,584 DEBUG [RS:4;jenkins-hbase9:35777] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,35777,1689159532363 2023-07-12 10:58:52,584 DEBUG [RS:4;jenkins-hbase9:35777] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,35777,1689159532363' 2023-07-12 10:58:52,585 DEBUG [RS:4;jenkins-hbase9:35777] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:52,585 DEBUG [RS:4;jenkins-hbase9:35777] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:52,585 DEBUG [RS:4;jenkins-hbase9:35777] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:52,585 INFO [RS:4;jenkins-hbase9:35777] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 10:58:52,585 INFO [RS:4;jenkins-hbase9:35777] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
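The entries that follow show the teardown creating the "master" group again and trying to move the master's own address, jenkins-hbase9.apache.org:42625, into it; RSGroupAdminServer rejects that with a ConstraintException because the address is not a registered region server. A minimal sketch of the calls that produce those requests, using the RSGroupAdminClient methods named in the stack trace; the Connection `conn` is assumed to exist:

    import java.util.Collections;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn); // conn: an open HBase Connection (assumed)
    rsGroupAdmin.addRSGroup("master");                              // the AddRSGroup request logged below
    try {
      // 42625 is the master's RPC port, not a region server, so the master answers with
      // "Server jenkins-hbase9.apache.org:42625 is either offline or it does not exist."
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase9.apache.org", 42625)), "master");
    } catch (ConstraintException expected) {
      // TestRSGroupsBase logs this as "Got this on setup, FYI" and continues the teardown.
    }
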
2023-07-12 10:58:52,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:52,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:52,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:52,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:52,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:52,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:52,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:42625] to rsgroup master 2023-07-12 10:58:52,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:42625 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:52,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] ipc.CallRunner(144): callId: 104 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.2.10:60770 deadline: 1689160732594, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:42625 is either offline or it does not exist. 2023-07-12 10:58:52,595 WARN [Listener at localhost/44831] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:42625 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor63.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:42625 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 10:58:52,599 INFO [Listener at localhost/44831] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:52,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:52,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:52,600 INFO [Listener at localhost/44831] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:35777, jenkins-hbase9.apache.org:39005, jenkins-hbase9.apache.org:41263, jenkins-hbase9.apache.org:44255], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:52,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:52,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42625] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:52,625 INFO [Listener at localhost/44831] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testClearDeadServers Thread=589 (was 555) - Thread LEAK? -, OpenFileDescriptor=920 (was 869) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=270 (was 270), ProcessCount=170 (was 170), AvailableMemoryMB=7974 (was 8021) 2023-07-12 10:58:52,625 WARN [Listener at localhost/44831] hbase.ResourceChecker(130): Thread=589 is superior to 500 2023-07-12 10:58:52,625 INFO [Listener at localhost/44831] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 10:58:52,626 INFO [Listener at localhost/44831] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 10:58:52,626 DEBUG [Listener at localhost/44831] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x33f2d925 to 127.0.0.1:49301 2023-07-12 10:58:52,626 DEBUG [Listener at localhost/44831] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:52,626 DEBUG [Listener at localhost/44831] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 10:58:52,626 DEBUG [Listener at localhost/44831] util.JVMClusterUtil(257): Found active master hash=1586401660, stopped=false 2023-07-12 10:58:52,626 DEBUG [Listener at localhost/44831] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 10:58:52,626 DEBUG [Listener at localhost/44831] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 10:58:52,627 INFO [Listener at localhost/44831] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase9.apache.org,42625,1689159518976 2023-07-12 10:58:52,629 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:52,629 DEBUG [Listener at localhost/44831-EventThread] 
zookeeper.ZKWatcher(600): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:52,630 INFO [Listener at localhost/44831] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 10:58:52,629 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:52,630 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:52,630 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:35777-0x1015920fb08002a, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:52,630 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:52,630 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:52,630 DEBUG [Listener at localhost/44831] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4318667d to 127.0.0.1:49301 2023-07-12 10:58:52,637 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:52,637 DEBUG [Listener at localhost/44831] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:52,637 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:52,638 INFO [Listener at localhost/44831] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,41263,1689159519156' ***** 2023-07-12 10:58:52,638 INFO [Listener at localhost/44831] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:52,638 INFO [Listener at localhost/44831] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,39005,1689159519576' ***** 2023-07-12 10:58:52,638 INFO [Listener at localhost/44831] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:52,638 INFO [RS:2;jenkins-hbase9:39005] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:52,638 INFO [Listener at localhost/44831] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,44255,1689159529453' ***** 2023-07-12 10:58:52,639 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35777-0x1015920fb08002a, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:52,639 INFO [RS:0;jenkins-hbase9:41263] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:52,639 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): 
master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:52,640 INFO [Listener at localhost/44831] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:52,641 INFO [Listener at localhost/44831] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,35777,1689159532363' ***** 2023-07-12 10:58:52,642 INFO [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:52,642 INFO [Listener at localhost/44831] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:52,643 INFO [RS:2;jenkins-hbase9:39005] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1dc12c2b{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:52,644 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-12 10:58:52,645 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-12 10:58:52,645 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-12 10:58:52,645 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-12 10:58:52,645 INFO [RS:3;jenkins-hbase9:44255] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4c297362{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:52,645 INFO [RS:4;jenkins-hbase9:35777] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:52,645 INFO [RS:3;jenkins-hbase9:44255] server.AbstractConnector(383): Stopped ServerConnector@2ad1f356{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:52,646 INFO [RS:3;jenkins-hbase9:44255] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:52,646 INFO [RS:0;jenkins-hbase9:41263] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@54c460b4{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:52,647 INFO [RS:3;jenkins-hbase9:44255] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@74dcb463{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:52,647 INFO [RS:2;jenkins-hbase9:39005] server.AbstractConnector(383): Stopped ServerConnector@e64859e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:52,647 INFO [RS:2;jenkins-hbase9:39005] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:52,647 INFO [RS:3;jenkins-hbase9:44255] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@798dfac2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:52,649 INFO [RS:2;jenkins-hbase9:39005] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@76a41604{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:52,648 INFO [RS:0;jenkins-hbase9:41263] server.AbstractConnector(383): Stopped ServerConnector@10f53854{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:52,650 INFO [RS:2;jenkins-hbase9:39005] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@41c22c3f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:52,650 INFO [RS:0;jenkins-hbase9:41263] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:52,650 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:52,651 INFO [RS:0;jenkins-hbase9:41263] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@ed0b40b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:52,652 INFO [RS:0;jenkins-hbase9:41263] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@478af4cc{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:52,652 INFO [RS:3;jenkins-hbase9:44255] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:52,652 INFO [RS:3;jenkins-hbase9:44255] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:52,652 INFO [RS:3;jenkins-hbase9:44255] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:52,652 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:52,652 INFO [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(3305): Received CLOSE for 0832c48321f808d3b4d6fb68605b1448 2023-07-12 10:58:52,652 INFO [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(3305): Received CLOSE for e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:52,653 INFO [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(3305): Received CLOSE for b71f0d9015a7b2292849acab5e81c0c6 2023-07-12 10:58:52,653 INFO [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:52,653 DEBUG [RS:3;jenkins-hbase9:44255] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6f290853 to 127.0.0.1:49301 2023-07-12 10:58:52,653 DEBUG [RS:3;jenkins-hbase9:44255] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:52,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 0832c48321f808d3b4d6fb68605b1448, disabling compactions & flushes 2023-07-12 10:58:52,653 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:52,653 INFO [RS:3;jenkins-hbase9:44255] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
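The ConstraintException in the stack trace above ("Server jenkins-hbase9.apache.org:42625 is either offline or it does not exist.") is raised by RSGroupAdminServer.moveServers when the requested server is not a live, known region server. As a rough illustration of how that code path is reached from a client, the sketch below assumes the hbase-rsgroup client API used by this module's tests (RSGroupAdminClient, Address); the class name, target group "my_group", and the catch clause are illustrative, not taken from this run, and depending on how the remote exception is unwrapped the failure may also surface as a plain IOException.

```java
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServerExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // The rsgroup admin calls are served by the master-side coprocessor
      // (RSGroupAdminEndpoint) visible in the stack trace above.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      Address server = Address.fromString("jenkins-hbase9.apache.org:42625");
      try {
        // Rejected with ConstraintException when the named server is not a
        // live, known region server -- the situation logged above.
        rsGroupAdmin.moveServers(Collections.singleton(server), "my_group");
      } catch (ConstraintException e) {
        System.out.println("move rejected: " + e.getMessage());
      }
    }
  }
}
```

In branch-2.4 this request travels through the master coprocessor endpoint shown in the trace rather than the core Admin API, which is why the failure arrives wrapped in a RemoteWithExtrasException.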
2023-07-12 10:58:52,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:52,653 INFO [RS:3;jenkins-hbase9:44255] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:52,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. after waiting 0 ms 2023-07-12 10:58:52,653 INFO [RS:3;jenkins-hbase9:44255] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 10:58:52,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:52,653 INFO [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 10:58:52,653 INFO [RS:4;jenkins-hbase9:35777] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3f156638{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:52,653 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 0832c48321f808d3b4d6fb68605b1448 1/1 column families, dataSize=2.04 KB heapSize=3.55 KB 2023-07-12 10:58:52,653 INFO [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-12 10:58:52,653 DEBUG [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(1478): Online Regions={0832c48321f808d3b4d6fb68605b1448=hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448., e5addb24bba6e8be9d4cddc12a45ff25=hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25., 1588230740=hbase:meta,,1.1588230740, b71f0d9015a7b2292849acab5e81c0c6=hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6.} 2023-07-12 10:58:52,654 DEBUG [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(1504): Waiting on 0832c48321f808d3b4d6fb68605b1448, 1588230740, b71f0d9015a7b2292849acab5e81c0c6, e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:52,654 INFO [RS:4;jenkins-hbase9:35777] server.AbstractConnector(383): Stopped ServerConnector@4b16a276{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:52,655 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:52,655 INFO [RS:4;jenkins-hbase9:35777] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:52,655 INFO [RS:0;jenkins-hbase9:41263] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:52,656 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:52,655 INFO [RS:2;jenkins-hbase9:39005] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:52,654 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 10:58:52,657 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:52,657 INFO [RS:2;jenkins-hbase9:39005] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-12 10:58:52,657 INFO [RS:2;jenkins-hbase9:39005] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:52,657 INFO [RS:2;jenkins-hbase9:39005] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:52,657 DEBUG [RS:2;jenkins-hbase9:39005] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x060d7140 to 127.0.0.1:49301 2023-07-12 10:58:52,657 DEBUG [RS:2;jenkins-hbase9:39005] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:52,657 INFO [RS:2;jenkins-hbase9:39005] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,39005,1689159519576; all regions closed. 2023-07-12 10:58:52,656 INFO [RS:0;jenkins-hbase9:41263] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:52,656 INFO [RS:4;jenkins-hbase9:35777] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@691c90cf{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:52,657 INFO [RS:0;jenkins-hbase9:41263] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:52,658 INFO [RS:0;jenkins-hbase9:41263] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:52,658 DEBUG [RS:0;jenkins-hbase9:41263] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x44efe797 to 127.0.0.1:49301 2023-07-12 10:58:52,658 DEBUG [RS:0;jenkins-hbase9:41263] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:52,657 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 10:58:52,658 INFO [RS:0;jenkins-hbase9:41263] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,41263,1689159519156; all regions closed. 
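The "Waiting up to [60,000] milli-secs" and "Waiting for cleanup to finish [Name:default, ...]" lines earlier in this run come from the test base class polling the rsgroup admin until only the built-in groups remain before the minicluster is shut down. A minimal sketch of that polling pattern, assuming HBaseTestingUtility.waitFor and the RSGroupAdmin interface from this module (the helper class, method name, and the group-count check are illustrative, not copied from the test):

```java
import java.util.List;

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.Waiter;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public final class RSGroupCleanupWait {
  /** Polls until only the built-in rsgroups (default and master) remain. */
  static void waitForCleanup(HBaseTestingUtility util, RSGroupAdmin admin) throws Exception {
    util.waitFor(60000, new Waiter.Predicate<Exception>() {
      @Override
      public boolean evaluate() throws Exception {
        List<RSGroupInfo> groups = admin.listRSGroups();
        // Mirrors the "Waiting for cleanup to finish ..." log line above.
        System.out.println("Waiting for cleanup to finish " + groups);
        return groups.size() == 2;
      }
    });
  }
}
```

Each evaluate() call is what produces the RSGroupAdminService.ListRSGroupInfos / GetRSGroupInfo requests interleaved with the waiter messages in the log.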
2023-07-12 10:58:52,658 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 10:58:52,659 INFO [RS:4;jenkins-hbase9:35777] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@33956253{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:52,659 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 10:58:52,659 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 10:58:52,659 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.43 KB heapSize=6.39 KB 2023-07-12 10:58:52,660 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:52,660 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:52,660 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:52,662 INFO [RS:4;jenkins-hbase9:35777] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:52,662 INFO [RS:4;jenkins-hbase9:35777] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:52,662 INFO [RS:4;jenkins-hbase9:35777] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:52,663 INFO [RS:4;jenkins-hbase9:35777] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,35777,1689159532363 2023-07-12 10:58:52,663 DEBUG [RS:4;jenkins-hbase9:35777] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3fbc4c03 to 127.0.0.1:49301 2023-07-12 10:58:52,664 DEBUG [RS:4;jenkins-hbase9:35777] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:52,664 INFO [RS:4;jenkins-hbase9:35777] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,35777,1689159532363; all regions closed. 2023-07-12 10:58:52,664 DEBUG [RS:4;jenkins-hbase9:35777] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:52,664 INFO [RS:4;jenkins-hbase9:35777] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:52,664 INFO [RS:4;jenkins-hbase9:35777] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:52,664 INFO [RS:4;jenkins-hbase9:35777] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:52,664 INFO [RS:4;jenkins-hbase9:35777] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:52,664 INFO [RS:4;jenkins-hbase9:35777] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 10:58:52,664 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 10:58:52,668 INFO [RS:4;jenkins-hbase9:35777] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:35777 2023-07-12 10:58:52,674 DEBUG [RS:0;jenkins-hbase9:41263] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs 2023-07-12 10:58:52,674 INFO [RS:0;jenkins-hbase9:41263] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C41263%2C1689159519156.meta:.meta(num 1689159520864) 2023-07-12 10:58:52,675 DEBUG [RS:2;jenkins-hbase9:39005] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs 2023-07-12 10:58:52,675 INFO [RS:2;jenkins-hbase9:39005] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C39005%2C1689159519576:(num 1689159520772) 2023-07-12 10:58:52,675 DEBUG [RS:2;jenkins-hbase9:39005] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:52,675 INFO [RS:2;jenkins-hbase9:39005] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:52,680 INFO [RS:2;jenkins-hbase9:39005] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:52,681 INFO [RS:2;jenkins-hbase9:39005] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:52,681 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:52,681 INFO [RS:2;jenkins-hbase9:39005] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:52,681 INFO [RS:2;jenkins-hbase9:39005] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 10:58:52,689 INFO [RS:2;jenkins-hbase9:39005] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:39005 2023-07-12 10:58:52,690 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/WALs/jenkins-hbase9.apache.org,41263,1689159519156/jenkins-hbase9.apache.org%2C41263%2C1689159519156.1689159520770 not finished, retry = 0 2023-07-12 10:58:52,701 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.43 KB at sequenceid=203 (bloomFilter=false), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/info/4a13432187e54cc4816ee7d8007d3b8a 2023-07-12 10:58:52,702 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-12 10:58:52,702 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-12 10:58:52,707 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/.tmp/info/4a13432187e54cc4816ee7d8007d3b8a as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/4a13432187e54cc4816ee7d8007d3b8a 2023-07-12 10:58:52,712 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/info/4a13432187e54cc4816ee7d8007d3b8a, entries=30, sequenceid=203, filesize=8.2 K 2023-07-12 10:58:52,713 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.43 KB/3510, heapSize ~5.88 KB/6016, currentSize=0 B/0 for 1588230740 in 54ms, sequenceid=203, compaction requested=true 2023-07-12 10:58:52,713 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-12 10:58:52,747 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/meta/1588230740/recovered.edits/206.seqid, newMaxSeqId=206, maxSeqId=191 2023-07-12 10:58:52,747 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:52,748 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 10:58:52,748 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 10:58:52,748 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 10:58:52,770 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:52,770 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:39005-0x1015920fb08001f, 
quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:52,770 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:52,770 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:52,770 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:35777-0x1015920fb08002a, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,39005,1689159519576 2023-07-12 10:58:52,770 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,35777,1689159532363 2023-07-12 10:58:52,770 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:52,770 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:35777-0x1015920fb08002a, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:52,770 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:52,770 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:52,770 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,35777,1689159532363 2023-07-12 10:58:52,770 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,35777,1689159532363 2023-07-12 10:58:52,770 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:35777-0x1015920fb08002a, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,35777,1689159532363 2023-07-12 10:58:52,771 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,35777,1689159532363] 2023-07-12 10:58:52,771 DEBUG [RegionServerTracker-0] 
master.DeadServer(103): Processing jenkins-hbase9.apache.org,35777,1689159532363; numProcessing=1 2023-07-12 10:58:52,773 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,35777,1689159532363 already deleted, retry=false 2023-07-12 10:58:52,773 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,35777,1689159532363 expired; onlineServers=3 2023-07-12 10:58:52,773 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,39005,1689159519576] 2023-07-12 10:58:52,773 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,39005,1689159519576; numProcessing=2 2023-07-12 10:58:52,774 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,39005,1689159519576 already deleted, retry=false 2023-07-12 10:58:52,774 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,39005,1689159519576 expired; onlineServers=2 2023-07-12 10:58:52,792 DEBUG [RS:0;jenkins-hbase9:41263] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs 2023-07-12 10:58:52,792 INFO [RS:0;jenkins-hbase9:41263] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C41263%2C1689159519156:(num 1689159520770) 2023-07-12 10:58:52,792 DEBUG [RS:0;jenkins-hbase9:41263] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:52,792 INFO [RS:0;jenkins-hbase9:41263] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:52,793 INFO [RS:0;jenkins-hbase9:41263] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:52,793 INFO [RS:0;jenkins-hbase9:41263] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:52,793 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:52,793 INFO [RS:0;jenkins-hbase9:41263] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:52,793 INFO [RS:0;jenkins-hbase9:41263] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 10:58:52,794 INFO [RS:0;jenkins-hbase9:41263] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:41263 2023-07-12 10:58:52,804 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:52,804 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:52,804 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,41263,1689159519156 2023-07-12 10:58:52,805 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,41263,1689159519156] 2023-07-12 10:58:52,805 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,41263,1689159519156; numProcessing=3 2023-07-12 10:58:52,807 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,41263,1689159519156 already deleted, retry=false 2023-07-12 10:58:52,808 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,41263,1689159519156 expired; onlineServers=1 2023-07-12 10:58:52,854 DEBUG [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(1504): Waiting on 0832c48321f808d3b4d6fb68605b1448, b71f0d9015a7b2292849acab5e81c0c6, e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:53,054 DEBUG [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(1504): Waiting on 0832c48321f808d3b4d6fb68605b1448, b71f0d9015a7b2292849acab5e81c0c6, e5addb24bba6e8be9d4cddc12a45ff25 2023-07-12 10:58:53,091 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.04 KB at sequenceid=115 (bloomFilter=true), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/.tmp/m/9f26d17148d94b9ab32be33e7505ce3b 2023-07-12 10:58:53,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9f26d17148d94b9ab32be33e7505ce3b 2023-07-12 10:58:53,098 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/.tmp/m/9f26d17148d94b9ab32be33e7505ce3b as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/9f26d17148d94b9ab32be33e7505ce3b 2023-07-12 10:58:53,103 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9f26d17148d94b9ab32be33e7505ce3b 2023-07-12 10:58:53,103 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/9f26d17148d94b9ab32be33e7505ce3b, entries=4, sequenceid=115, filesize=5.3 K 2023-07-12 10:58:53,103 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.04 KB/2093, heapSize ~3.53 KB/3616, currentSize=0 B/0 for 0832c48321f808d3b4d6fb68605b1448 in 450ms, sequenceid=115, compaction requested=false 2023-07-12 10:58:53,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 10:58:53,108 DEBUG [StoreCloser-hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/bffd8719b2904374a71e69e548411438, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/ff557c1700754976a0534ed7a4fce455, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/73dee99a570c4214990ad5de3fad4284] to archive 2023-07-12 10:58:53,109 DEBUG [StoreCloser-hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-07-12 10:58:53,111 DEBUG [StoreCloser-hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/bffd8719b2904374a71e69e548411438 to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/bffd8719b2904374a71e69e548411438 2023-07-12 10:58:53,112 DEBUG [StoreCloser-hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/ff557c1700754976a0534ed7a4fce455 to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/ff557c1700754976a0534ed7a4fce455 2023-07-12 10:58:53,114 DEBUG [StoreCloser-hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/73dee99a570c4214990ad5de3fad4284 to hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/archive/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/m/73dee99a570c4214990ad5de3fad4284 2023-07-12 10:58:53,117 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/rsgroup/0832c48321f808d3b4d6fb68605b1448/recovered.edits/118.seqid, newMaxSeqId=118, maxSeqId=104 2023-07-12 10:58:53,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor 
org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:53,118 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:53,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 0832c48321f808d3b4d6fb68605b1448: 2023-07-12 10:58:53,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689159487528.0832c48321f808d3b4d6fb68605b1448. 2023-07-12 10:58:53,119 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing e5addb24bba6e8be9d4cddc12a45ff25, disabling compactions & flushes 2023-07-12 10:58:53,119 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:53,119 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:53,119 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. after waiting 0 ms 2023-07-12 10:58:53,119 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:53,122 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/namespace/e5addb24bba6e8be9d4cddc12a45ff25/recovered.edits/36.seqid, newMaxSeqId=36, maxSeqId=33 2023-07-12 10:58:53,122 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:53,122 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for e5addb24bba6e8be9d4cddc12a45ff25: 2023-07-12 10:58:53,122 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689159487410.e5addb24bba6e8be9d4cddc12a45ff25. 2023-07-12 10:58:53,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing b71f0d9015a7b2292849acab5e81c0c6, disabling compactions & flushes 2023-07-12 10:58:53,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:53,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:53,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. after waiting 0 ms 2023-07-12 10:58:53,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 
2023-07-12 10:58:53,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/data/hbase/quota/b71f0d9015a7b2292849acab5e81c0c6/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 10:58:53,126 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:53,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for b71f0d9015a7b2292849acab5e81c0c6: 2023-07-12 10:58:53,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689159513001.b71f0d9015a7b2292849acab5e81c0c6. 2023-07-12 10:58:53,230 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:53,230 INFO [RS:0;jenkins-hbase9:41263] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,41263,1689159519156; zookeeper connection closed. 2023-07-12 10:58:53,231 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:41263-0x1015920fb08001d, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:53,231 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3304dcb7] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3304dcb7 2023-07-12 10:58:53,254 INFO [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,44255,1689159529453; all regions closed. 2023-07-12 10:58:53,260 DEBUG [RS:3;jenkins-hbase9:44255] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs 2023-07-12 10:58:53,260 INFO [RS:3;jenkins-hbase9:44255] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C44255%2C1689159529453.meta:.meta(num 1689159530496) 2023-07-12 10:58:53,265 DEBUG [RS:3;jenkins-hbase9:44255] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/oldWALs 2023-07-12 10:58:53,265 INFO [RS:3;jenkins-hbase9:44255] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C44255%2C1689159529453:(num 1689159529772) 2023-07-12 10:58:53,265 DEBUG [RS:3;jenkins-hbase9:44255] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:53,265 INFO [RS:3;jenkins-hbase9:44255] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:53,265 INFO [RS:3;jenkins-hbase9:44255] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:53,265 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 10:58:53,266 INFO [RS:3;jenkins-hbase9:44255] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:44255 2023-07-12 10:58:53,268 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,44255,1689159529453 2023-07-12 10:58:53,268 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:53,269 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,44255,1689159529453] 2023-07-12 10:58:53,269 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,44255,1689159529453; numProcessing=4 2023-07-12 10:58:53,271 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,44255,1689159529453 already deleted, retry=false 2023-07-12 10:58:53,271 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,44255,1689159529453 expired; onlineServers=0 2023-07-12 10:58:53,271 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,42625,1689159518976' ***** 2023-07-12 10:58:53,271 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 10:58:53,272 DEBUG [M:0;jenkins-hbase9:42625] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5ab10e75, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:53,272 INFO [M:0;jenkins-hbase9:42625] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:53,274 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:53,274 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:53,274 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:53,275 INFO [M:0;jenkins-hbase9:42625] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@c48016{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 10:58:53,275 INFO [M:0;jenkins-hbase9:42625] server.AbstractConnector(383): Stopped ServerConnector@1b8ca572{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:53,275 INFO [M:0;jenkins-hbase9:42625] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:53,276 INFO [M:0;jenkins-hbase9:42625] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@5d5ab32a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:53,276 INFO [M:0;jenkins-hbase9:42625] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@468d9d18{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:53,277 INFO [M:0;jenkins-hbase9:42625] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,42625,1689159518976 2023-07-12 10:58:53,277 INFO [M:0;jenkins-hbase9:42625] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,42625,1689159518976; all regions closed. 2023-07-12 10:58:53,277 DEBUG [M:0;jenkins-hbase9:42625] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:53,277 INFO [M:0;jenkins-hbase9:42625] master.HMaster(1491): Stopping master jetty server 2023-07-12 10:58:53,278 INFO [M:0;jenkins-hbase9:42625] server.AbstractConnector(383): Stopped ServerConnector@5293f66e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:53,278 DEBUG [M:0;jenkins-hbase9:42625] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 10:58:53,278 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 10:58:53,278 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159520526] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159520526,5,FailOnTimeoutGroup] 2023-07-12 10:58:53,278 DEBUG [M:0;jenkins-hbase9:42625] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 10:58:53,278 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159520515] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159520515,5,FailOnTimeoutGroup] 2023-07-12 10:58:53,279 INFO [M:0;jenkins-hbase9:42625] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-12 10:58:53,279 INFO [M:0;jenkins-hbase9:42625] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-12 10:58:53,279 INFO [M:0;jenkins-hbase9:42625] hbase.ChoreService(369): Chore service for: master/jenkins-hbase9:0 had [] on shutdown 2023-07-12 10:58:53,279 DEBUG [M:0;jenkins-hbase9:42625] master.HMaster(1512): Stopping service threads 2023-07-12 10:58:53,279 INFO [M:0;jenkins-hbase9:42625] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 10:58:53,279 ERROR [M:0;jenkins-hbase9:42625] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-12 10:58:53,279 INFO [M:0;jenkins-hbase9:42625] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 10:58:53,279 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-12 10:58:53,279 DEBUG [M:0;jenkins-hbase9:42625] zookeeper.ZKUtil(398): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-12 10:58:53,280 WARN [M:0;jenkins-hbase9:42625] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-12 10:58:53,280 INFO [M:0;jenkins-hbase9:42625] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 10:58:53,280 INFO [M:0;jenkins-hbase9:42625] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 10:58:53,280 DEBUG [M:0;jenkins-hbase9:42625] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 10:58:53,280 INFO [M:0;jenkins-hbase9:42625] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:53,280 DEBUG [M:0;jenkins-hbase9:42625] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:53,280 DEBUG [M:0;jenkins-hbase9:42625] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 10:58:53,280 DEBUG [M:0;jenkins-hbase9:42625] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:53,280 INFO [M:0;jenkins-hbase9:42625] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=73.82 KB heapSize=90.70 KB 2023-07-12 10:58:53,293 INFO [M:0;jenkins-hbase9:42625] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=73.82 KB at sequenceid=1203 (bloomFilter=true), to=hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/1048619612ac40e5b305253c9d287beb 2023-07-12 10:58:53,299 DEBUG [M:0;jenkins-hbase9:42625] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/1048619612ac40e5b305253c9d287beb as hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/1048619612ac40e5b305253c9d287beb 2023-07-12 10:58:53,303 INFO [M:0;jenkins-hbase9:42625] regionserver.HStore(1080): Added hdfs://localhost:42757/user/jenkins/test-data/1fc8a9c5-570f-c2ae-2a06-7765a8ff7de5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/1048619612ac40e5b305253c9d287beb, entries=24, sequenceid=1203, filesize=8.3 K 2023-07-12 10:58:53,304 INFO [M:0;jenkins-hbase9:42625] regionserver.HRegion(2948): Finished flush of dataSize ~73.82 KB/75595, heapSize ~90.68 KB/92856, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=1203, compaction requested=true 2023-07-12 10:58:53,306 INFO [M:0;jenkins-hbase9:42625] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 10:58:53,306 DEBUG [M:0;jenkins-hbase9:42625] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 10:58:53,310 INFO [M:0;jenkins-hbase9:42625] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 10:58:53,310 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:53,311 INFO [M:0;jenkins-hbase9:42625] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:42625 2023-07-12 10:58:53,312 DEBUG [M:0;jenkins-hbase9:42625] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase9.apache.org,42625,1689159518976 already deleted, retry=false 2023-07-12 10:58:53,331 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:35777-0x1015920fb08002a, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:53,331 INFO [RS:4;jenkins-hbase9:35777] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,35777,1689159532363; zookeeper connection closed. 2023-07-12 10:58:53,331 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:35777-0x1015920fb08002a, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:53,331 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3b01590c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3b01590c 2023-07-12 10:58:53,431 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:53,431 INFO [RS:2;jenkins-hbase9:39005] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,39005,1689159519576; zookeeper connection closed. 2023-07-12 10:58:53,431 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:39005-0x1015920fb08001f, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:53,431 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@416906d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@416906d 2023-07-12 10:58:53,531 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:53,531 INFO [M:0;jenkins-hbase9:42625] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,42625,1689159518976; zookeeper connection closed. 2023-07-12 10:58:53,531 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): master:42625-0x1015920fb08001c, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:53,631 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:53,631 INFO [RS:3;jenkins-hbase9:44255] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,44255,1689159529453; zookeeper connection closed. 
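Editor's note: the repeated "Received ZooKeeper Event, type=None, state=Closed" lines above and below are session-state callbacks delivered to each server's watcher as its ZooKeeper connection closes. The sketch below uses the plain Apache ZooKeeper client rather than HBase's ZKWatcher, and the quorum address in it is a placeholder; it only shows how such type=None state events arrive at a watcher.

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class SessionEventSketch {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    // Placeholder quorum; the test log above used 127.0.0.1:49301.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30_000, (WatchedEvent event) -> {
      // Connection/session-state changes are delivered with EventType.None.
      if (event.getType() == Watcher.Event.EventType.None) {
        System.out.println("Received ZooKeeper Event, type=None, state=" + event.getState());
        if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
          connected.countDown();
        }
      }
    });
    connected.await();
    // Closing the client yields a final state=Closed event on ZooKeeper 3.5+,
    // matching the Closed events logged during this shutdown.
    zk.close();
  }
}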
2023-07-12 10:58:53,631 DEBUG [Listener at localhost/44831-EventThread] zookeeper.ZKWatcher(600): regionserver:44255-0x1015920fb080028, quorum=127.0.0.1:49301, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:53,632 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@25819e25] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@25819e25 2023-07-12 10:58:53,632 INFO [Listener at localhost/44831] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 5 regionserver(s) complete 2023-07-12 10:58:53,632 WARN [Listener at localhost/44831] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 10:58:53,638 INFO [Listener at localhost/44831] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 10:58:53,645 WARN [BP-1946597163-172.31.2.10-1689159478370 heartbeating to localhost/127.0.0.1:42757] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 10:58:53,646 WARN [BP-1946597163-172.31.2.10-1689159478370 heartbeating to localhost/127.0.0.1:42757] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1946597163-172.31.2.10-1689159478370 (Datanode Uuid 0b8e6506-b2d3-4f82-af25-88926a9a69f5) service to localhost/127.0.0.1:42757 2023-07-12 10:58:53,648 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/cluster_88e0f84f-bfb4-0918-fd25-f5762e628808/dfs/data/data5/current/BP-1946597163-172.31.2.10-1689159478370] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 10:58:53,648 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/cluster_88e0f84f-bfb4-0918-fd25-f5762e628808/dfs/data/data6/current/BP-1946597163-172.31.2.10-1689159478370] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 10:58:53,650 WARN [Listener at localhost/44831] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 10:58:53,668 INFO [Listener at localhost/44831] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 10:58:53,774 WARN [BP-1946597163-172.31.2.10-1689159478370 heartbeating to localhost/127.0.0.1:42757] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 10:58:53,774 WARN [BP-1946597163-172.31.2.10-1689159478370 heartbeating to localhost/127.0.0.1:42757] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1946597163-172.31.2.10-1689159478370 (Datanode Uuid 8b34f536-f3b6-4e8c-8608-ad66bf3bae1d) service to localhost/127.0.0.1:42757 2023-07-12 10:58:53,775 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/cluster_88e0f84f-bfb4-0918-fd25-f5762e628808/dfs/data/data3/current/BP-1946597163-172.31.2.10-1689159478370] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 10:58:53,775 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/cluster_88e0f84f-bfb4-0918-fd25-f5762e628808/dfs/data/data4/current/BP-1946597163-172.31.2.10-1689159478370] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 10:58:53,777 WARN [Listener at localhost/44831] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 10:58:53,781 INFO [Listener at localhost/44831] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 10:58:53,886 WARN [BP-1946597163-172.31.2.10-1689159478370 heartbeating to localhost/127.0.0.1:42757] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 10:58:53,886 WARN [BP-1946597163-172.31.2.10-1689159478370 heartbeating to localhost/127.0.0.1:42757] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1946597163-172.31.2.10-1689159478370 (Datanode Uuid b3713489-402d-4b1a-a017-e520575ddeaf) service to localhost/127.0.0.1:42757 2023-07-12 10:58:53,887 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/cluster_88e0f84f-bfb4-0918-fd25-f5762e628808/dfs/data/data1/current/BP-1946597163-172.31.2.10-1689159478370] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 10:58:53,887 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1eb23fe6-357d-c436-6f01-48950b99a497/cluster_88e0f84f-bfb4-0918-fd25-f5762e628808/dfs/data/data2/current/BP-1946597163-172.31.2.10-1689159478370] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 10:58:53,920 INFO [Listener at localhost/44831] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 10:58:54,043 INFO [Listener at localhost/44831] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-12 10:58:54,114 INFO [Listener at localhost/44831] hbase.HBaseTestingUtility(1293): Minicluster is down
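Editor's note: the closing lines ("Shutdown of 1 master(s) and 5 regionserver(s) complete" ... "Minicluster is down") are emitted by HBaseTestingUtility as it tears down the minicluster. The sketch below is a minimal JUnit 4 test skeleton, not the TestRSGroupsBasics source, showing the lifecycle calls that produce a startup/shutdown log like this one; the class name and the smoke test are hypothetical.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class MiniClusterLifecycleSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpCluster() throws Exception {
    // Starts mini DFS, mini ZooKeeper, one master and the requested regionservers.
    TEST_UTIL.startMiniCluster(3);
  }

  @AfterClass
  public static void tearDownCluster() throws Exception {
    // Stops masters, regionservers, datanodes and ZooKeeper, ending with the
    // "Minicluster is down" line seen at the end of this log.
    TEST_UTIL.shutdownMiniCluster();
  }

  @Test
  public void clusterIsReachable() throws Exception {
    // Hypothetical smoke check: the Admin connection should answer once the cluster is up.
    TEST_UTIL.getAdmin().listTableNames();
  }
}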